Future of Life Institute Podcast

Roman Yampolskiy on Objections to AI Safety

05.26.2023 - By Future of Life Institute


Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/

Timestamps:

00:00 Objections to AI safety

15:06 Will robots make AI risks salient?

27:51 Was early AI safety research useful?

37:28 Impossibility results for AI

47:25 How much risk should we accept?

1:01:21 Exponential or S-curve?

1:12:27 Will AI accidents increase?

1:23:56 Will we know who was right about AI?

1:33:33 Difference between AI output and AI model

Social Media Links:

➡️ WEBSITE: https://futureoflife.org

➡️ TWITTER: https://twitter.com/FLIxrisk

➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/

➡️ META: https://www.facebook.com/futureoflifeinstitute

➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
