
Support the show to get full episodes, full archive, and join the Discord community.
Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.
By Paul Middlebrooks
4.8 · 134 ratings
