
Researchers propose a new optimizer, LOMO (LOw-Memory Optimization), that reduces memory usage to enable full-parameter fine-tuning of large language models on a single machine.
https://arxiv.org/abs/2306.09782
By Igor Melnyk
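The summary above is brief, so here is a rough illustration of the idea. In the paper, LOMO fuses the SGD-style parameter update into the backward pass, so the full set of gradients and separate optimizer states never has to be held in memory at the same time. The sketch below is a hypothetical, minimal PyTorch rendering of that fusion using per-parameter gradient hooks; the helper name attach_fused_sgd_hooks and the hook-based mechanism are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def attach_fused_sgd_hooks(model: nn.Module, lr: float) -> None:
    # Update each parameter the moment autograd produces its gradient,
    # then hand back a zero gradient so nothing useful piles up in .grad.
    for p in model.parameters():
        if not p.requires_grad:
            continue

        def fused_step(grad: torch.Tensor, p: nn.Parameter = p) -> torch.Tensor:
            p.data.add_(grad, alpha=-lr)   # plain SGD step, applied in place
            return torch.zeros_like(grad)  # replaces the gradient that would otherwise be stored
        p.register_hook(fused_step)

# Toy usage: parameters are updated during loss.backward();
# there is no optimizer object and no optimizer.step().
model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))
attach_fused_sgd_hooks(model, lr=1e-2)

x, y = torch.randn(8, 32), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
```

In this sketch each gradient is consumed as soon as it is produced, so peak memory is dominated by the parameters themselves rather than parameters plus gradients plus optimizer states, which is the kind of saving that makes single-machine full-parameter fine-tuning feasible.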