
Update README.md

UARTman, 5 years ago
commit 6451f34cee
1 changed file with 2 additions and 2 deletions

+ 2 - 2
README.md

@@ -9,7 +9,7 @@ Distributed training of large neural networks across volunteer computers.
 
 
 **[WIP]** - this branch is a work in progress. If you're interested in
 supplementary code for [Learning@home paper](https://arxiv.org/abs/2002.04013),
-you can find it at https://github.com/mryab/learning-at-home .
+you can find it at https://github.com/mryab/learning-at-home.
 
 ## What do I need to run it?
 
@@ -61,4 +61,4 @@ do something complex with it, please contact us by opening an issue (less prefer
 * You can achieve 4x less network load by passing quantized uint8 activations across experts.
     Implement your own quantization or wait for tesseract v0.8.
 * Currently runtime can form batches that exceed maximal batch_size by task_size - 1.
-    We will fix that in the nearest patch.
+    We will fix that in the nearest patch.
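The README text in this diff mentions a 4x network-load reduction from sending quantized uint8 activations instead of float32. As a hedged illustration of where that factor comes from (4-byte floats vs. 1-byte integers), here is a minimal affine-quantization sketch in NumPy; the function names and the round-trip scheme are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def quantize_uint8(x):
    """Affine-quantize a float32 array to uint8 plus (scale, offset).

    Hypothetical helper -- not from the repository; it only illustrates
    the 4x payload shrink claimed in the README.
    """
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant input
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_uint8(q, scale, lo):
    """Map uint8 codes back to approximate float32 values."""
    return q.astype(np.float32) * scale + lo

acts = np.random.randn(64, 1024).astype(np.float32)  # fake "activations"
q, scale, lo = quantize_uint8(acts)

# float32 is 4 bytes per value, uint8 is 1 byte: the payload is 4x smaller
# (ignoring the two scalars of per-tensor metadata).
assert q.nbytes * 4 == acts.nbytes

# Round-trip error stays within one quantization step.
restored = dequantize_uint8(q, scale, lo)
assert np.max(np.abs(restored - acts)) <= scale
```

In practice a per-tensor (or per-row) scale and offset like this must be transmitted alongside the uint8 payload, but their size is negligible next to the activation tensor itself.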