<br>
</p>

Generate text with distributed **Llama 3.1** (up to 405B), **Mixtral** (8x22B), **Falcon** (40B+) or **BLOOM** (176B) and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...
```

<p align="center">
    🚀 <b><a href="https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing">Try now in Colab</a></b>
</p>
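
💡 **Fine-tuning.** The same distributed model can be trained, not just sampled: gradients flow through the swarm back to parameters on your machine. Below is a minimal prompt-tuning sketch; the `tuning_mode="ptune"` and `pre_seq_len` arguments and the single toy batch are illustrative assumptions, so check the fine-tuning tutorial in the Colab above for the exact API:

```python
import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Trainable prompt embeddings stay on your device; remote blocks remain frozen
model = AutoDistributedModelForCausalLM.from_pretrained(
    model_name, tuning_mode="ptune", pre_seq_len=16
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# One toy training step: next-token loss on a single example
batch = tokenizer("A cat sat on a mat", return_tensors="pt")["input_ids"]
loss = model(input_ids=batch, labels=batch).loss
loss.backward()  # backprop runs through the servers hosting the layers
opt.step()
opt.zero_grad()
```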

🦙 **Want to run Llama?** [Request access](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) to its weights, then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev).
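
💡 **Tip.** You can also log in from Python instead of the terminal. A small sketch using the `huggingface_hub` library (the token string is a placeholder for your own 🔑 [access token](https://huggingface.co/settings/tokens)):

```python
from huggingface_hub import login

# Authenticate this process so gated weights (e.g., Llama) can be downloaded
login(token="hf_...")  # placeholder: paste your own token here
```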

🔏 **Privacy.** Your data will be processed with the help of other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust.

💬 **Any questions?** Ping us in [our Discord](https://discord.gg/KdThf2bWVU)!

## Connect your GPU and increase Petals capacity

Petals is a community-run system — we rely on people sharing their GPUs. You can help serve one of the [available models](https://health.petals.dev) or host a new model from 🤗 [Model Hub](https://huggingface.co/models)!

As an example, here is how to host a part of [Llama 3.1 (405B) Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) on your GPU:

🦙 **Want to host Llama?** [Request access](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) to its weights, then run `huggingface-cli login` in the terminal before loading the model.

🐧 **Linux + Anaconda.** Run these commands for NVIDIA GPUs (or follow [this guide](https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU) for AMD):

```bash
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install git+https://github.com/bigscience-workshop/petals
python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
```

🪟 **Windows + WSL.** Follow [this guide](https://github.com/bigscience-workshop/petals/wiki/Run-Petals-server-on-Windows) on our Wiki.

🐋 **Docker.** Run our [Docker](https://www.docker.com) image for NVIDIA and AMD GPUs:

```bash
sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
    learningathome/petals:main \
    python -m petals.cli.run_server --port 31330 meta-llama/Meta-Llama-3.1-405B-Instruct
```

🍏 **macOS + Apple M1/M2 GPU.** Install [Homebrew](https://brew.sh/), then run these commands:

```bash
brew install python
python3 -m pip install git+https://github.com/bigscience-workshop/petals
python3 -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
```

<p align="center">
📚 <b><a href="https://github.com/bigscience-workshop/petals/wiki/FAQ:-Frequently-asked-questions#running-a-server">Learn more</a></b> (how to use multiple GPUs, start the server on boot, etc.)
</p>

🔒 **Security.** Hosting a server does not allow others to run custom code on your computer. Learn more [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety).

💬 **Any questions?** Ping us in [our Discord](https://discord.gg/X7DgtxgMhc)!

🏆 **Thank you!** Once you load and host 10+ blocks, we can show your name or link on the [swarm monitor](https://health.petals.dev) as a way to say thanks. You can set it with `--public_name YOUR_NAME`.

## How does it work?

You load a small part of the model locally, then join a swarm of people serving the other parts; requests are routed through their GPUs as if the whole model were running on your machine.
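
For multi-step generation (e.g., a chatbot), you can keep one inference session open so the servers reuse their attention caches between steps instead of re-processing the whole prefix. A sketch that assumes the `inference_session()` API shown in the Petals tutorials:

```python
# Reuses `model` and `tokenizer` from the quickstart above
with model.inference_session(max_length=128) as sess:
    history = "User: What is Petals?\nAssistant:"
    inputs = tokenizer(history, return_tensors="pt")["input_ids"]
    # Passing the session lets consecutive generate() calls share server-side caches
    outputs = model.generate(inputs, max_new_tokens=30, session=sess)
    print(tokenizer.decode(outputs[0]))
```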

### Benchmarks

Please see **Section 3.3** of our [paper](https://arxiv.org/pdf/2209.01188.pdf).

### 🛠️ Contributing

Please see our [FAQ](https://github.com/bigscience-workshop/petals/wiki/FAQ:-Frequently-asked-questions#contributing) on contributing.

### 📜 Citations

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel.
[Petals: Collaborative Inference and Fine-tuning of Large Models.](https://arxiv.org/abs/2209.01188)
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)._ 2023.

```bibtex
@inproceedings{borzunov2023petals,
    title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
    author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Riabinin, Maksim and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
    booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
    pages = {558--568},
    year = {2023},
    url = {https://arxiv.org/abs/2209.01188}
}
```

Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, and Colin Raffel.
[Distributed inference and fine-tuning of large language models over the Internet.](https://arxiv.org/abs/2312.08361)
_Advances in Neural Information Processing Systems_ 36 (2023).

```bibtex
@inproceedings{borzunov2023distributed,
    title = {Distributed inference and fine-tuning of large language models over the {I}nternet},
    author = {Borzunov, Alexander and Ryabinin, Max and Chumachenko, Artem and Baranchuk, Dmitry and Dettmers, Tim and Belkada, Younes and Samygin, Pavel and Raffel, Colin},
    booktitle = {Advances in Neural Information Processing Systems},
    volume = {36},
    pages = {12312--12331},
    year = {2023},
    url = {https://arxiv.org/abs/2312.08361}
}
```

--------------------------------------------------------------------------------

<p align="center">