@@ -9,7 +9,7 @@ Distributed training of large neural networks across volunteer computers.
**[WIP]** - this branch is a work in progress. If you're interested in
supplementary code for the [Learning@home paper](https://arxiv.org/abs/2002.04013),
-you can find it at https://github.com/mryab/learning-at-home .
+you can find it at https://github.com/mryab/learning-at-home.
## What do I need to run it?
@@ -61,4 +61,4 @@ do something complex with it, please contact us by opening an issue (less prefer
* You can achieve a 4x reduction in network load by passing quantized uint8 activations between experts.
Implement your own quantization or wait for tesseract v0.8.
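The 4x figure follows from the element width alone: uint8 is a quarter the size of float32. A minimal sketch of rolling your own, assuming a simple min-max affine scheme (`quantize_uint8` and `dequantize_uint8` are hypothetical helpers, not the tesseract v0.8 implementation):

```python
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Affine min-max quantization of a float32 tensor to uint8.
    Hypothetical sketch -- not the tesseract implementation."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant tensors
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo  # send q over the wire plus two floats of metadata

def dequantize_uint8(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Reconstruct an approximate float32 tensor on the receiving side."""
    return q.astype(np.float32) * scale + lo

activations = np.random.randn(64, 1024).astype(np.float32)
q, scale, lo = quantize_uint8(activations)
restored = dequantize_uint8(q, scale, lo)

# uint8 payload is exactly 4x smaller than the float32 original
assert q.nbytes * 4 == activations.nbytes
# reconstruction error is bounded by half a quantization step
assert np.max(np.abs(restored - activations)) <= scale / 2 + 1e-6
```

The trade-off is a per-tensor quantization error of at most half a step (`scale / 2`), which is usually tolerable for activations but worth validating on your own model.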
* Currently, the runtime can form batches that exceed the maximal batch_size by up to task_size - 1.
- We will fix that in the nearest patch.
+ We will fix this in an upcoming patch.
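The overshoot happens because tasks are appended whole: the last task added can push the batch past the cap. A hypothetical toy model of this greedy accumulation (`form_batch` is illustrative, not the runtime's actual code):

```python
def form_batch(task_sizes, max_batch_size):
    """Greedily accumulate whole tasks until the batch reaches max_batch_size.

    Because a task is never split, the final batch can exceed the cap by
    at most (size of the last task) - 1 samples.
    """
    batch = 0
    for size in task_sizes:
        batch += size
        if batch >= max_batch_size:
            break
    return batch

task_size = 4
batch = form_batch([task_size] * 10, max_batch_size=9)
# 4 + 4 = 8 < 9, so a third task is added: batch = 12,
# exceeding max_batch_size=9 by task_size - 1 = 3 (the worst case)
assert batch == 12
assert batch - 9 <= task_size - 1
```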