Lincoln Cannon

AI Apocalypse



AI is taking over the world, it seems. ChatGPT is doing homework. Midjourney is revolutionizing art. And Copilot is writing code. For those who haven't been paying attention, it may seem like all of this has come out of nowhere. Suddenly, anyone with an Internet connection can have a conversation with an AI that would pass most historical conceptions of the Turing Test. And entire industries that were once safely out of reach of legacy information technology are now being transformed. Even for those of us who've been paying attention, even anticipating such advances, there's still something surreal about it all.

It's no wonder that some people have become deeply troubled by the change. Maybe they include you. Or maybe they're about to include you, as you read the next sentence. The troubled people, in this case, include no small number of experts in the field of AI and adjacent areas of study. A noteworthy example is Eliezer Yudkowsky. He's a decision theorist and founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization that focuses on the development of safe AI, or what he would describe as AI that's aligned with human values. Eliezer recently appeared on the Bankless podcast, where he was interviewed about recent developments in AI and proclaimed quite seriously, "we're all gonna die." In his serious concern with AI, Eliezer is far from alone.

Future of Life Institute

The Future of Life Institute (FLI) is a non-profit research organization that aims to mitigate existential risks facing humanity, particularly those related to emerging technologies such as artificial intelligence, biotechnology, and nuclear weapons. FLI recently published an open letter regarding AI experiments. As of today, it has over 14K signatures, including many from active researchers. The letter calls for a six-month pause on the development of artificial intelligence systems that are more powerful than the newest version of ChatGPT. It argues that AI labs should use this pause to develop safety protocols, which should then be audited and overseen by independent experts. The letter also calls for the development of robust AI governance systems, including regulation of highly capable AI systems and liability for AI-caused harm. It argues that humanity can enjoy a flourishing future with AI, but only if we plan and manage its development carefully.

Eliezer Yudkowsky

Eliezer didn't sign FLI's open letter. Does he think it was asking for too much? No. To the contrary, as he wrote in an editorial on the TIME website, "pausing AI developments isn't enough." "We need to shut it all down," continued Eliezer. Why? He argues that it's difficult to predict the thresholds that will result in the creation of superhuman AI, and that labs risk unintentionally crossing critical thresholds without noticing. If they do, speculates Eliezer, the most likely outcome is that everyone on Earth will die. Eliezer argues that, absent the ability to imbue AI with care for sentient life, there's a high risk that superhuman AI will not do what humans want. While it's possible in principle to create an AI that cares for sentient life, current science and technology aren't adequate to the task. And the result of a conflict with superhuman intelligence would likely be a total loss for humanity. Eliezer observes that there's no plan for how to build superhuman AI and survive the consequences. The current plans of AI labs, such as OpenAI and DeepMind, are insufficient.
So we urgently need a more serious approach to mitigating the risks of superhuman AI. And that could take a lot longer than six months, possibly even decades, as illustrated by efforts to address similar risks associated with nuclear weapons. But not everyone agrees with Eliezer.

Max More

Max More is a philosopher and futurist. He's best known for his work as a leading proponent of Transhumanism. Max has written extensively about the ethics of emerging ...


