


In this episode, Shashank Rajput, Research Scientist at Mosaic and Databricks, explores innovative approaches to large language models (LLMs), with a focus on Retrieval Augmented Generation (RAG) and its impact on improving efficiency and reducing operational costs.
Highlights include:
- How RAG enhances LLM accuracy by incorporating relevant external documents.
- The evolution of attention mechanisms, including mixed attention strategies.
- Practical applications of Mamba architectures and their trade-offs with traditional transformers.
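The RAG idea in the first highlight can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt so the model can ground its answer. The sketch below is a minimal illustration with hypothetical helper names and a toy word-overlap retriever; real systems use embedding-based similarity search, and no actual LLM is called here.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query.

    A toy stand-in for embedding-based retrieval.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the query with retrieved context before generation."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "Mamba is a state-space model architecture.",
    "RAG retrieves external documents to ground LLM answers.",
]
prompt = build_prompt("How does RAG improve accuracy?", docs)
```

The augmented prompt gives the model direct access to relevant external text, which is the core of the accuracy gain discussed in the episode.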
By Databricks