This repository is a part of the NeurIPS 2021 demonstration "Training Transformers Together".
In this demo, we train a model similar to OpenAI DALL-E: a Transformer "language model" that generates images from text descriptions. Training happens collaboratively, with volunteers from all over the Internet contributing whatever hardware is available to them. We train on LAION-400M, the largest openly available dataset of image-text pairs, containing 400 million samples. Our model is based on Phil Wang's dalle-pytorch implementation, with a few tweaks to make it communication-efficient.
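The snippet below is a minimal sketch of what joining such a collaborative run can look like with the hivemind library for decentralized deep learning, which this demo builds on. The model, run name, peer address, and batch sizes are hypothetical placeholders rather than this demo's actual configuration; see run_trainer.py in this repository for the real training code.

```python
# A minimal sketch of joining a collaborative training run, assuming the
# hivemind library. The model, run_id, peer address, and batch sizes are
# hypothetical placeholders, not this demo's actual configuration.
import torch
import hivemind

model = torch.nn.Linear(512, 512)  # stand-in for the real DALL-E model
base_optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Join the distributed hash table that peers use to discover each other.
dht = hivemind.DHT(
    initial_peers=["/ip4/203.0.113.1/tcp/31337/p2p/QmPlaceholderPeerID"],
    start=True,
)

# Wrap the local optimizer: gradients are accumulated and averaged across
# peers, and a global step fires once the swarm reaches target_batch_size.
optimizer = hivemind.Optimizer(
    dht=dht,
    run_id="dalle_demo",        # hypothetical run name
    batch_size_per_step=4,      # samples this peer contributes per step
    target_batch_size=4096,     # global batch size shared by all peers
    optimizer=base_optimizer,
    use_local_updates=False,
    verbose=True,
)

for _ in range(100):
    batch = torch.randn(4, 512)        # stand-in for LAION image-text batches
    loss = model(batch).pow(2).mean()  # stand-in loss
    loss.backward()
    optimizer.step()                   # averages gradients with other peers
    optimizer.zero_grad()
```

Because each volunteer contributes only a small `batch_size_per_step`, the swarm jointly accumulates gradients until the global target batch is reached, which keeps individual hardware requirements low.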
See our website for details on how to join and how the training works.