The Theory of Anything

Episode 49: AGI Alignment and Safety

08.01.2022 - By Bruce Nielson


Is Elon Musk right that Artificial General Intelligence (AGI) research is like 'summoning the demon' and should be regulated?

In episodes 48 and 49, we discussed how our genes 'align' our interests with their own using carrots and sticks (pleasure/pain) or attention and perception. If our genes can create a General Intelligence (i.e., Universal Explainer) alignment and safety 'program' for us, what's to stop us from doing the same to future Artificial General Intelligences (AGIs) that we create?

But even if we can, should we?

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon." --Elon Musk

---

Send in a voice message: https://podcasters.spotify.com/pod/show/four-strands/message

Support this podcast: https://podcasters.spotify.com/pod/show/four-strands/support
