Support the show to get full episodes, full archive, and join the Discord community.
Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.
4.9 (133 ratings)