I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That's where I come in.
By Reid Blackman
4.9 · 5252 ratings
The podcast currently has 18 episodes available.
California just signed a bill to drastically decrease deepfakes on social media. The worry, of course, is that they are already being used to unjustifiably sway voters. In this episode, one of the best from Season 1, I talk to Dean Jackson and Jon Bateman, experts on the role of deepfakes in disinformation campaigns. The bottom line? Deepfakes aren’t great but they’re not half the problem.
What does it look like to integrate ethics into the teams that are building AI? How can we make ethics a practice and not a compliance checklist? In today’s episode I talk with Marc Steen, author of the book “Ethics for People Who Work in Tech,” who answers these questions and more.
Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don't really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don't we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today.
From the best of season 1: I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls "sustainable AI." We focus on the environmental impacts of AI, the ethical implications of those impacts, and who pays the social costs while others reap AI's benefits.
Is our collective approach to ensuring AI doesn't go off the rails fundamentally misguided? Is it too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an "AI ethics ecosystem." It's a big lift, but without it we face an even bigger problem.
Many researchers in AI think we should make AI capable of ethical inquiry. We can't teach it all the ethical rules; that's impossible. Instead, we should teach it to reason ethically, just as we do with children. But my guest thinks this strategy rests on a number of controversial assumptions, including about how ethics works and what actually is right and wrong.
It’s common to hear we need new regulations to avoid the risks of AI (bias, privacy violations, manipulation, etc.). But my guest, Dean Ball, thinks this claim is too hastily made. In fact, he argues, we don’t need a new regulatory regime tailored to AI. If he’s right, then in a way that’s good news, since regulations are so notoriously difficult to push through. But he emphasizes we still need a robust governance response to the risks at hand. What are those responses? Have a listen and find out!
Everyone knows biased or discriminatory AI is bad and we need to get rid of it, right? Well, not so fast.
I’m bringing one of the best episodes from Season 1 back. I talk to David Danks, a professor of data science and philosophy at UCSD. He and his research team argue that we need to reconceive our approach to biased AI. In some cases, David thinks, it can be beneficial. Good policy - both corporate and regulatory - needs to take this into account.
Data about us is collected, aggregated, and shared in more ways than we can count. In some cases, this leads to great benefits. In others, a great deal of harm. But at the end of the day, the truth is that it’s all out of control. No individual, nor any private company, nor any government has a grip on what gets collected, what gets done with it, and what the societal impacts are. In this episode I talk to Aram Sinnreich and Jesse Gilbert about their new book, “The Secret Life of Data,” in which they explain the complexity and how we should begin to take back control.