
In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm?
TL;DR:
LLM generalisation without hallucinations: is that possible?
References
https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf
https://www.lamini.ai/blog/lamini-memory-tuning
By Francesco Gadaleta