


In Episode #5 Part 2, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville and a renowned AI safety researcher.
Among the many topics discussed in this episode:
-what lies at the core of AI safety risk skepticism
-why AI safety research leaders themselves are so all over the map
-why journalism is failing so miserably to cover AI safety appropriately
-the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
ROMAN YAMPOLSKIY RESOURCES
➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampol...
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/... 
➡️Roman on Medium: https://romanyam.medium.com/
#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind
By The AI Risk Network
