
LLaMA-1 is a collection of large language models ranging from 7B to 65B parameters, trained exclusively on publicly available datasets. The LLaMA models achieve competitive performance against other LLMs such as GPT-3, Chinchilla, and PaLM: the 13B model outperforms GPT-3 on most benchmarks despite being more than 10× smaller, and the 65B model is competitive with the best large language models. The paper also covers the training approach, architecture, and optimization, along with evaluations on common sense reasoning, question answering, reading comprehension, mathematical reasoning, code generation, and massive multitask language understanding, as well as analyses of bias and toxicity. The models are intended to democratize access to and study of LLMs, with some of them able to run on a single GPU, and to serve as a basis for further research.
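To make the "runs on a single GPU" point concrete, here is a minimal sketch of loading a 7B-parameter LLaMA checkpoint with Hugging Face transformers; the model id used below is a placeholder (LLaMA-1 weights are distributed under a research license and the hub id you actually have access to may differ), and the sketch assumes torch, transformers, and accelerate are installed.

```python
# Minimal sketch: running a ~7B LLaMA checkpoint on one GPU in fp16.
# "huggyllama/llama-7b" is a placeholder id; substitute a local path or
# hub id that actually holds the weights you are licensed to use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # placeholder, not an official distribution

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the 7B weights around ~14 GB
    device_map="auto",          # requires accelerate; places weights on the GPU
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In half precision the 7B model fits comfortably on a single 24 GB consumer GPU, which is the access-democratizing property the description highlights.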