
Dr Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games, and he is the author of over 100 publications, including multiple journal articles and books.
Session Summary
We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, Yampolskiy argues for keeping AI as narrow, task-oriented systems: “I'm arguing that it's impossible to indefinitely control superintelligent systems.” Nonetheless, he is optimistic about the future capabilities of narrow AI, from politics to longevity and health.
Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.
Hosted by Allison Duettmann and Beatrice Erkers
Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram
Explore every word spoken on this podcast through Fathom.fm.
Hosted on Acast. See acast.com/privacy for more information.