
In this episode, we demystify how researchers teach AI models to behave helpfully and safely using Reinforcement Learning from Human Feedback (RLHF). We discuss why even very large models can generate undesired outputs and how RLHF addresses this by incorporating human preferences. You’ll learn how models like InstructGPT were trained: first by supervised fine-tuning on human-written demonstration responses, then by having humans rank model outputs to train a reward model, and finally by using reinforcement learning (e.g., PPO) to fine-tune the model so it better aligns with what users want. We also talk about refinements like Constitutional AI and why aligning AI with human values remains an ongoing challenge.
By Mo Bhuiyan via NotebookLM
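For listeners who want to see the core idea in code, here is a minimal sketch of the pairwise reward-model objective discussed in the episode: given a pair of responses where a human preferred one over the other, the reward model is trained to score the preferred response higher, via a Bradley-Terry style loss. The model class, embedding size, and random data below are illustrative assumptions for a toy example, not the actual InstructGPT implementation (which scores responses with a language model backbone).

```python
# Toy sketch of pairwise reward-model training for RLHF.
# Everything here (ToyRewardModel, embed_dim, random "encodings")
# is an illustrative assumption, not a real system's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyRewardModel(nn.Module):
    """Maps a fixed-size response representation to a scalar reward."""

    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One scalar reward per response in the batch.
        return self.score(x).squeeze(-1)


reward_model = ToyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for encoded (prompt, response) pairs where human raters
# preferred `chosen` over `rejected`; a real pipeline would use the
# language model's hidden states instead of random vectors.
chosen = torch.randn(8, 32)
rejected = torch.randn(8, 32)

for step in range(100):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Pairwise ranking loss: push the preferred response's reward above
    # the rejected one's, i.e. -log sigmoid(r_chosen - r_rejected).
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the subsequent RL step described in the episode, the policy model is optimized against this learned reward with PPO, with a KL penalty toward the original supervised model so that outputs do not drift too far from fluent, on-distribution text.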