Large Language Model (LLM) Talk

RLHF (Reinforcement Learning from Human Feedback)


Reinforcement Learning from Human Feedback (RLHF) incorporates human preferences into AI systems, addressing problems where a clear reward function is difficult to specify. The basic pipeline has three stages: train a language model, collect human preference data and use it to train a reward model, then optimize the language model with an RL optimizer against the reward model's scores. A KL-divergence penalty against the reference model is used as regularization to prevent over-optimization of the reward model. RLHF is one member of a broader family of preference fine-tuning techniques, and it has become a crucial post-training step for aligning language models with human values and eliciting desirable behaviors.
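As a minimal sketch of two of those stages, the snippet below (PyTorch; illustrative only, not code from the episode) shows a Bradley-Terry-style pairwise loss for reward-model training and a KL-penalized reward for the RL step. All function names and the beta value here are assumptions chosen for this example.

```python
import torch
import torch.nn.functional as F

# Stage 2 (sketch): train a reward model on human preference pairs.
# Bradley-Terry-style pairwise loss: the reward model should score the
# human-preferred ("chosen") response above the rejected one.
def reward_model_loss(score_chosen: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Stage 3 (sketch): KL-regularized reward for the RL optimizer.
def kl_penalized_reward(rm_score: torch.Tensor,
                        logprobs_policy: torch.Tensor,
                        logprobs_ref: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    # Per-token KL estimate between the policy being tuned and the frozen
    # reference model: log pi_theta(y_t | x) - log pi_ref(y_t | x).
    kl = (logprobs_policy - logprobs_ref).sum(-1)
    # Penalize drift from the reference model to curb over-optimization.
    return rm_score - beta * kl

# Toy usage with made-up numbers:
print(reward_model_loss(torch.tensor([1.8]), torch.tensor([0.3])))
print(kl_penalized_reward(torch.tensor(2.5),
                          torch.tensor([-1.2, -0.8, -2.0]),
                          torch.tensor([-1.3, -1.0, -1.9])))
```

The beta coefficient trades off reward maximization against staying close to the reference model; setting it too low permits exactly the over-optimization the episode cautions against.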


Large Language Model (LLM) Talk, by AI-Talk

4 ratings


More shows like Large Language Model (LLM) Talk

• Super Data Science: ML & AI Podcast with Jon Krohn, by Jon Krohn (303 listeners)
• NVIDIA AI Podcast, by NVIDIA (341 listeners)
• The Daily, by The New York Times (112,584 listeners)
• Learning English from the News, by BBC Radio (264 listeners)
• Thinking in English, by Thomas Wilkinson (110 listeners)
• AI Agents: Top Trend of 2025 - by AIAgentStore.ai (3 listeners)