
Support the show to get full episodes, full archive, and join the Discord community.
Steve Byrnes is a physicist turned AGI safety researcher. He's concerned that when we create AGI, whenever and however that might happen, we run the risk of creating it in a less than perfectly safe way. AGI safety (AGI not doing something bad) is a wide net that encompasses AGI alignment (AGI doing what we want it to do). We discuss a host of ideas Steve writes about in his Intro to Brain-Like-AGI Safety blog series, which uses what he has learned about brains to address how we might safely make AGI.
By Paul Middlebrooks · 4.8 (134 ratings)
