
People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.
By Reid Blackman