# Credits

We kindly thank (in random order):

* [Artem Babenko](https://research.yandex.com/people/102794) and [Vladimir Aliev](https://ru.linkedin.com/in/vladimir-aliev-19b93282) for helpful discussions and editorial review of the paper,
* [Jacob R. Steeves](https://github.com/unconst) for discussions on RPC frameworks, NAT traversal, and peer-to-peer technologies,
* [Dmitry Afanasiev](https://www.linkedin.com/in/dmitry-afanasiev-295a231/) for his guidance on networking and communication technologies,
* [Lidi Zheng](https://github.com/lidizheng) and the grpc-aio contributors for their awesome framework and [this PR](https://github.com/grpc/grpc/pull/23265),
* [Brian Muller](https://github.com/bmuller/kademlia) for his implementations of [kademlia](https://github.com/bmuller/kademlia) and [rpcudp](https://github.com/bmuller/rpcudp),
* Alexander Sherbakov for helpful discussions on PC and server component architecture,
* Our early adopters, [contributors](https://github.com/learning-at-home/hivemind/graphs/contributors), and reviewers.

# Related projects

We also want to reference several projects that have similar ideas in mind:

* [BitTensor](https://github.com/opentensor/BitTensor) — a decentralized deep learning ecosystem with an incentive mechanism. Like hivemind, except that peers are rewarded for their contributions to other peers.
* [GShard](https://arxiv.org/abs/2006.16668) — a paper by Dmitry Lepikhin et al. that demonstrates the effectiveness of huge Mixture-of-Experts models on conventional HPC hardware. The authors train models four times the size of GPT-3 on thousands of TPUv3 devices.
* Also doing research in decentralized deep learning? Let us know!