

Researchers propose a new optimizer, LOMO, that reduces memory usage to enable full parameter fine-tuning of large language models on a single machine.
https://arxiv.org/abs/2306.09782
By Igor Melnyk
