Alexander Borzunov | 90fbaab61e | Fix Docker build by avoiding Python 3.11 (#348) | 2 years ago
Alexander Borzunov | 43acfe52a7 | Import petals.utils.peft only when needed to avoid unnecessary import of bitsandbytes (#345) | 2 years ago
Alexander Borzunov | 294970fe18 | Update Colab link | 2 years ago
Alexander Borzunov | 515a5120cb | Mention LLaMA in readme (#344) | 2 years ago
Max Ryabinin | 13f4e3a88a | Fix convergence issues and switch to LLaMA in the SST-2 example (#343) | 2 years ago
Artem Chumachenko | b9f0a5467f | Support peft LoRA adapters (#335) | 2 years ago
Alexander Borzunov | dfc6578c8e | Use bitsandbytes 0.40.0.post4 with bias hotfix (#342) | 2 years ago
Alexander Borzunov | b28f5016ea | Delete deprecated petals.cli scripts (#336) | 2 years ago
Alexander Borzunov | fa095f6461 | Use 4-bit for llama by default, use bitsandbytes 0.40.0.post3 (#340) | 2 years ago
Alexander Borzunov | 158013a671 | Implement direct server-to-server communication (#331) | 2 years ago
Alexander Borzunov | 4d9c26fe5c | Allow free_disk_space_for() remove arbitrary files from Petals cache (#339) | 2 years ago
Alexander Borzunov | de930918a0 | Support loading blocks in 4-bit (QLoRA NF4 format, disabled by default) (#333) | 2 years ago
Alexander Borzunov | 66a47c763e | Require pydantic < 2.0 (2.0 is incompatible with hivemind 1.1.8) (#337) | 2 years ago
Alexander Borzunov | 10c72acdf4 | Fix warmup steps and minor issues in benchmarks (#334) | 2 years ago
Alexander Borzunov | d126ee3053 | Add benchmark scripts (#319) | 2 years ago
Alexander Borzunov | fecee8c4dc | Show license links when loading models (#332) | 2 years ago
Alexander Borzunov | 47a2b1ee65 | Fix llama's lm_head.weight.requires_grad (#330) | 2 years ago
Alexander Borzunov | 7a37513f77 | Add AutoDistributed{Model, ModelForCausalLM, ModelForSequenceClassification} (#329) | 2 years ago
Alexander Borzunov | cb3f018f9f | Add LLaMA support (#323) | 2 years ago
Max Ryabinin | 5c0733711a | Use number of tokens for attn_cache_size (#286) | 2 years ago
Max Ryabinin | c839173e57 | Determine block dtype in a unified manner (#325) | 2 years ago
Max Ryabinin | 3e7ae5116d | Remove unused imports and attributes (#324) | 2 years ago
Alexander Borzunov | 675bacb592 | Bump version to 1.1.5 (#312) | 2 years ago
Alexander Borzunov | e026952338 | Abort speedtest if it runs too long (#316) | 2 years ago
Alexander Borzunov | 6eb306a605 | Raise error for unexpected .generate() kwargs (#315) | 2 years ago
Alexander Borzunov | d9e7bfc949 | Divide compute throughput by average no. of used blocks (#314) | 2 years ago
Alexander Borzunov | 6137b1b4b0 | Replace .make_sequence(..., mode="random") with mode="max_throughput" (#313) | 2 years ago
Alexander Borzunov | 0a313bf6c5 | Update hivemind to 1.1.8, enable efficient bfloat16 encoding (#311) | 2 years ago
Alexander Borzunov | 8f6342a861 | Refactor RemoteSequenceManager (#309) | 2 years ago
Alexander Borzunov | 454c193863 | Fix OOMs happening in case of accelerate >= 0.16.0 (#310) | 2 years ago