
Update hivemind/optim/experimental/optimizer.py

Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
justheuristic 3 years ago
parent
commit 7c566e985b
1 changed file with 1 addition and 1 deletion:

  1. hivemind/optim/experimental/optimizer.py (+1 −1)

+ 1 - 1
hivemind/optim/experimental/optimizer.py

@@ -63,7 +63,7 @@ class Optimizer(torch.optim.Optimizer):
     :note: If you are using CollaborativeOptimizer with lr_scheduler, it is recommended to pass this scheduler
       explicitly into this class. Otherwise, scheduler may not be synchronized between peers.
 
-    :param matchmaking_time: when looking for group, wait for peers to join for up to this many secodns
+    :param matchmaking_time: when looking for group, wait for peers to join for up to this many seconds
     :param averaging_timeout: if an averaging step hangs for this long, it will be cancelled.
     :param load_state_timeout: wait for at most this many seconds before giving up on load_state_from_peers
     :param reuse_grad_buffers: if True, use model's .grad buffers for gradient accumulation.
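
For context, the parameters documented in this hunk are constructor arguments of hivemind.Optimizer. Below is a minimal sketch of how they might be wired together; the DHT setup, run_id, target_batch_size, and the toy model are illustrative assumptions for this example, not values prescribed by the commit.

```python
import torch
import hivemind

# Illustrative setup: a fresh DHT node and a toy model/optimizer pair.
dht = hivemind.DHT(start=True)
model = torch.nn.Linear(16, 4)
base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

optimizer = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",          # assumed name for the training run
    target_batch_size=4096,     # illustrative global batch size
    optimizer=base_optimizer,
    # scheduler=...             # per the note above, pass your lr_scheduler here
    #                           # so it stays synchronized between peers
    matchmaking_time=3.0,       # wait up to 3 seconds for peers to join a group
    averaging_timeout=30.0,     # cancel an averaging step that hangs this long
    load_state_timeout=600.0,   # give up on load_state_from_peers after 10 minutes
    reuse_grad_buffers=False,   # set True to accumulate gradients in .grad buffers
)
```

After construction, the wrapper is used like a regular PyTorch optimizer (backward, then optimizer.step()), with matchmaking and gradient averaging handled internally according to the timeouts above.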