Machine Ethics Podcast episodes

51. AGI Safety and Alignment with Robert Miles


This episode we're chatting with Robert Miles about why we even want artificial general intelligence; general AI as narrow AI whose input is the world; when predictions of AI sound like science fiction; terms such as AI safety, the control problem, AI alignment, and the specification problem; the lack of people working in AI alignment; why AGI doesn't need to be conscious; and more.

By Ben Byford and friends
