
---
client: agi_sf
project_id: core_readings
feed_id: agi_sf__alignment
narrator: pw
qa: mds
qa_time: 0h15m
---
One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better.
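To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the core technique described in the article: fit a reward model to pairwise human comparisons with a Bradley-Terry style cross-entropy loss, as in Christiano et al., "Deep Reinforcement Learning from Human Preferences" (2017). The network shape, segment lengths, and the toy comparison below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an observation-action vector to a scalar reward estimate."""
    def __init__(self, obs_act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (T, obs_act_dim) -> summed predicted reward over the clip
        return self.net(segment).sum()

def preference_loss(model, seg_a, seg_b, label):
    """Bradley-Terry cross-entropy: label 0 means the human preferred
    segment A, label 1 means segment B."""
    logits = torch.stack([model(seg_a), model(seg_b)])
    return nn.functional.cross_entropy(logits.unsqueeze(0), label.view(1))

# Toy training step on one synthetic comparison (illustrative only).
torch.manual_seed(0)
model = RewardModel(obs_act_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a, seg_b = torch.randn(25, 8), torch.randn(25, 8)
label = torch.tensor(0)  # the human judged clip A as better
loss = preference_loss(model, seg_a, seg_b, label)
opt.zero_grad(); loss.backward(); opt.step()
```

The learned reward model can then stand in for a hand-written goal function when training a policy; the article describes how repeated rounds of human comparisons refine it.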
Original article:
https://openai.com/research/learning-from-human-preferences
Authors:
Dario Amodei, Paul Christiano, Alex Ray
---
This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.
Narrated by TYPE III AUDIO on behalf of BlueDot Impact.
Share feedback on this narration.