
Elon Musk has a new AI company, xAI. I appreciate that he seems very concerned about alignment. From his Twitter Spaces discussion:
I think I have been banging the drum on AI safety now for a long time. If I could press pause on AI or advanced AI digital superintelligence, I would. It doesn’t seem like that is realistic . . .
I could talk about this for a long time, it’s something that I’ve thought about for a really long time and actually was somewhat reluctant to do anything in this space because I am concerned about the immense power of a digital superintelligence. It’s something that, I think is maybe hard for us to even comprehend.
He describes his alignment strategy in that discussion and a later followup:
The premise is have the AI be maximally curious, maximally truth-seeking, I'm getting a little esoteric here, but I think from an AI safety standpoint, a maximally curious AI - one that's trying to understand the universe - I think is going to be pro-humanity from the standpoint that humanity is just much more interesting than not . . . Earth is vastly more interesting than Mars. . . that's like the best thing I can come up with from an AI safety standpoint. I think this is better than trying to explicitly program morality - if you try to program morality, you have to ask whose morality.
And even if you're extremely good at how you program morality into AI, there's the morality inversion problem - Waluigi - if you program Luigi, you inherently get Waluigi. I would be concerned about the way OpenAI is programming AI - about this is good, and that's not good.
https://astralcodexten.substack.com/p/contra-the-xai-alignment-plan