
People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.