


In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm?
TL;DR
Can LLMs generalise without hallucinating?
References
https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf
https://www.lamini.ai/blog/lamini-memory-tuning
By Francesco Gadaleta
