
Latest commit: 748f20991c "Make runners executable and improve their shebangs" by Aleksandr Borzunov, 3 years ago

| File | Commit | Last commit message | Updated |
|---|---|---|---|
| lib | 64dee420da | Upgrade to using hivemind.optim.experimental | 3 years ago |
| .gitignore | 72fc0bcdb7 | Initial commit (ru-max branch without private code) | 3 years ago |
| README.md | d4140807f4 | Update readme | 3 years ago |
| arguments.py | 19e3d2d060 | Make aux peers fetch checkpoints every 2 steps by default | 3 years ago |
| callback.py | 4918c58cb6 | Polish stdout | 3 years ago |
| data.py | c61c61b20d | Use t5-small tokenizer | 3 years ago |
| huggingface_auth.py | 3e604bc1f5 | fix auth | 3 years ago |
| manage_scaleset.py | c365b2ec9f | Tweak settings for the upcoming demo (#2) | 3 years ago |
| requirements.txt | 3db71b9de5 | Fix transformers version | 3 years ago |
| run_aux_peer.py | 748f20991c | Make runners executable and improve their shebangs | 3 years ago |
| run_trainer.py | 748f20991c | Make runners executable and improve their shebangs | 3 years ago |
| run_trainer_tpu.py | 748f20991c | Make runners executable and improve their shebangs | 3 years ago |
| task.py | 09240991cc | Make model uploading use access token from authorizer (#7) | 3 years ago |
| utils.py | f621362466 | Make logging less verbose | 3 years ago |

README.md

Training DALL-E with volunteers from all over the Internet

This repository is part of the NeurIPS 2021 demonstration "Training Transformers Together".

In this demo, we train a model similar to OpenAI DALL-E: a Transformer "language model" that generates images from text descriptions. Training happens collaboratively, with volunteers from all over the Internet contributing whatever hardware is available to them. We train on LAION-400M, the world's largest openly available dataset of image-text pairs, with 400 million samples. Our model is based on Phil Wang's dalle-pytorch implementation, with a few tweaks that make it communication-efficient.
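
To make the collaboration concrete, below is a minimal, hypothetical sketch of what a volunteer peer does. It assumes only the public hivemind and dalle_pytorch APIs; the model size, batch sizes, and run_id are illustrative placeholders rather than this repository's actual configuration.

```python
# Minimal sketch of a volunteer training peer, assuming the public APIs of
# hivemind (hivemind.DHT, hivemind.Optimizer) and dalle_pytorch. All
# hyperparameters and the run_id below are placeholders, not the values
# used by this repository.
import hivemind
import torch
from dalle_pytorch import DALLE, OpenAIDiscreteVAE

vae = OpenAIDiscreteVAE()  # pretrained discrete VAE that maps images to tokens
model = DALLE(dim=512, vae=vae, num_text_tokens=10000, text_seq_len=256,
              depth=2, heads=8)  # tiny stand-in for the demo's real config

# The first peer starts its own DHT; volunteers joining an existing run would
# pass initial_peers=["/ip4/.../tcp/.../p2p/..."] of a peer already online.
dht = hivemind.DHT(start=True)

optimizer = hivemind.Optimizer(
    dht=dht,
    run_id="dalle_demo",      # peers with the same run_id train together
    batch_size_per_step=4,    # samples this peer contributes per local step
    target_batch_size=4096,   # global batch accumulated across all peers
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    matchmaking_time=3.0,     # seconds spent looking for averaging partners
    averaging_timeout=10.0,
    verbose=True,
)

text = torch.randint(0, 10000, (4, 256))  # stand-in for tokenized captions
images = torch.rand(4, 3, 256, 256)       # stand-in for a LAION image batch
for step in range(10):
    loss = model(text, images, return_loss=True)
    loss.backward()
    optimizer.step()       # gradients are averaged with other peers once the
    optimizer.zero_grad()  # global target_batch_size has been accumulated
```

Judging by the file listing above, run_trainer.py (or run_trainer_tpu.py on TPUs) plays this role for volunteers, while run_aux_peer.py handles auxiliary duties such as fetching and uploading checkpoints.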

See our website for details on how to join and how the training works.