


In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm?
TL;DR: LLM generalisation without hallucinations — is that possible?
References
https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf
https://www.lamini.ai/blog/lamini-memory-tuning
By Francesco Gadaleta · 4.2 (7,272 ratings)
