


In this episode, Shashank Rajput, Research Scientist at Mosaic and Databricks, explores innovative approaches to large language models (LLMs), focusing on Retrieval Augmented Generation (RAG) and how it improves efficiency and reduces operational costs.
Highlights include:
- How RAG enhances LLM accuracy by incorporating relevant external documents.
- The evolution of attention mechanisms, including mixed attention strategies.
- Practical applications of Mamba architectures and their trade-offs with traditional transformers.
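The retrieval step behind RAG, as described in the first highlight, can be sketched roughly as follows. This is a toy illustration: bag-of-words cosine similarity stands in for the dense embeddings and vector store a production system would use, and all function and variable names are illustrative, not from the episode.

```python
# Toy sketch of the RAG retrieval step: rank documents by similarity to the
# query, then prepend the most relevant ones to the prompt sent to the LLM.
# A real system would use dense embeddings and a vector database instead of
# this bag-of-words cosine similarity.
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_rag_prompt(query: str, documents: list[str], k: int = 2) -> str:
    """Retrieve the k most relevant documents and prepend them to the query."""
    ranked = sorted(documents, key=lambda d: similarity(query, d), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Databricks provides a unified analytics platform.",
    "Mamba is a state space model architecture.",
    "RAG augments language models with retrieved documents.",
]
prompt = build_rag_prompt("How does RAG help language models?", docs, k=1)
```

The point of this shape is the accuracy gain mentioned above: the model answers grounded in retrieved text rather than relying only on its parametric knowledge.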
By Databricks
