Bittensor’s Subnet 3 Trains 72B AI Model on Decentralized Network
TLDR:
- Covenant-72B scored 67.1 on MMLU zero-shot, beating LLaMA-2-70B's 65.6 under identical test conditions.
- SparseLoCo reduced communication overhead by 146x using top-k sparsification, 2-bit quantization, and error feedback across nodes.
- Gauntlet scored every node's contribution via loss evaluation and OpenSkill ranking, all recorded on the blockchain.
- $TAO rose 14% to $236 post-announcement, with Grayscale expanding its TAO trust for institutional investor access.

Bittensor's Subnet 3 has trained a 72-billion-parameter AI model without a central data center. The model, named Covenant-72B, was built across more than 70 global participants, all connected through standard home internet connections.

Covenant-72B outperformed Meta's LLaMA-2-70B on the MMLU benchmark, scoring 67.1 against 65.6, with the test run under identical zero-shot conditions. This outcome challenges long-standing assumptions about what decentralized compute can achieve.

Two Technical Innovations Drove the Decentralized Training

For years, AI crypto projects claimed decentralized compute could match centralized labs. Bittensor's Subnet 3 now backs that claim with measurable results. The training covered 1.1 trillion tokens across more than 70 nodes worldwide, with every node running on a 500 Mb/s commodity internet connection. Two core innovations made this scale of training possible.

SparseLoCo cut communication overhead by 146x throughout the process. It combined top-k sparsification, 2-bit quantization, and error feedback to keep all nodes in sync, with no central server needed to coordinate the network.

The second innovation, Gauntlet, handled trust and contribution scoring during training. It assessed each node through loss evaluation and OpenSkill ranking, and all scores were logged on the blockchain for full transparency. This gave every participant a verifiable record of their contribution.

Milk Road reported on the outcome via social media, noting that distributed networks can now train large models.
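
The article names SparseLoCo's three ingredients but not its implementation. A minimal sketch of how those pieces typically fit together, assuming a simple uniform 2-bit quantizer and invented function names (compress_update, decompress_update), not Subnet 3's actual code:

```python
import numpy as np

def compress_update(grad, residual, k_frac=0.01):
    """Compress one local update: error feedback + top-k + 2-bit quantization.

    Hypothetical sketch of the mechanisms the article names; not
    SparseLoCo's actual implementation.
    """
    # Error feedback: fold in whatever earlier rounds failed to transmit.
    corrected = (grad + residual).ravel()

    # Top-k sparsification: keep only the largest-magnitude entries.
    k = max(1, int(k_frac * corrected.size))
    idx = np.argpartition(np.abs(corrected), -k)[-k:]
    values = corrected[idx]

    # 2-bit quantization: 4 uniform levels scaled by this round's max magnitude.
    scale = float(np.abs(values).max()) or 1.0
    codes = np.clip(np.round((values / scale + 1.0) * 1.5), 0, 3).astype(np.uint8)

    # New residual = everything the compressed message fails to represent,
    # carried forward so no signal is permanently lost.
    sent = np.zeros_like(corrected)
    sent[idx] = (codes / 1.5 - 1.0) * scale
    new_residual = (corrected - sent).reshape(grad.shape)
    return (idx, codes, scale), new_residual

def decompress_update(shape, idx, codes, scale):
    """Rebuild a dense update from the sparse, quantized message."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = (codes / 1.5 - 1.0) * scale
    return flat.reshape(shape)

# One simulated round on a toy "gradient":
grad = np.random.randn(1024, 1024)
residual = np.zeros_like(grad)
(idx, codes, scale), residual = compress_update(grad, residual)
update = decompress_update(grad.shape, idx, codes, scale)
```

With 1% density, each node ships only indices plus 2-bit codes instead of dense 32-bit values, which is roughly two orders of magnitude less traffic, the same regime as the 146x figure the article cites.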
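Gauntlet is likewise described only at a high level: evaluate each node's update by the loss improvement it produces, then rank nodes with OpenSkill. A toy version of that loop, assuming the openskill.py package's PlackettLuce model and an invented pairwise scoring rule (the node names, loss_deltas input, and score_round function are all illustrative):

```python
from itertools import combinations
from openskill.models import PlackettLuce

model = PlackettLuce()

# One rating (mu/sigma skill estimate) per participating node.
ratings = {node: model.rating() for node in ["node-a", "node-b", "node-c"]}

def score_round(loss_deltas):
    """Update ratings from one evaluation round.

    loss_deltas maps node id -> reduction in held-out loss produced by
    that node's update (higher is better). Hypothetical scoring rule:
    every pairwise comparison counts as a win for the larger reduction.
    """
    for a, b in combinations(loss_deltas, 2):
        if loss_deltas[a] == loss_deltas[b]:
            continue  # skip ties in this toy version
        winner, loser = (a, b) if loss_deltas[a] > loss_deltas[b] else (b, a)
        # rate() takes teams ordered best-first; each "team" is one node.
        [[ratings[winner]], [ratings[loser]]] = model.rate(
            [[ratings[winner]], [ratings[loser]]]
        )

score_round({"node-a": 0.012, "node-b": 0.004, "node-c": 0.009})
for node, r in sorted(ratings.items(), key=lambda kv: -kv[1].ordinal()):
    print(node, round(r.ordinal(), 3))  # ordinal() is a conservative skill score
```

In the real system these scores are, per the article, written to the blockchain each round, so a node's rating history doubles as its verifiable contribution record.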