
Hello, in this episode I talk about Retrieval Aware Fine Tuning (RAFT), a paper that proposes a new technique combining domain-specific fine-tuning with RAG to improve the retrieval capabilities of LLMs.
In the episode I also talk about another paper called RAFT, this time Reward rAnking Fine Tuning, which proposes a new technique for performing RLHF without the convergence problems of Reinforcement Learning.
Retrieval Aware Fine Tuning: https://arxiv.org/abs/2403.10131v1
Reward rAnking Fine Tuning: https://arxiv.org/pdf/2304.06767.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
LinkedIn of the podcast: https://www.linkedin.com/company/life-with-ai