I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in.
By Reid Blackman
4.9 (5,353 ratings)
The podcast currently has 27 episodes available.
You might want more online content moderation so insane conspiracy theories don’t flourish. Sex slaves in Democrat pizza shops, climate change is a hoax, and so on. But is it irrational to believe these things? Is content moderation - whether in the form of censoring or labelling something as false - the morally right and/or effective strategy? In this discussion Neil Levy and I go back to basics about what it is to be rational and how that helps us answer our questions. Neil’s fascinating answer in a nutshell: they’re not irrational and content moderation isn’t a good strategy. This is, I have to say, great stuff. Enjoy!
From the best of season 1. Part 2 of my conversation with Alex Grzankowski.
There’s good reason to think AI doesn’t understand anything: it’s just moving words around according to mathematical rules, predicting which words come next. But in this episode, philosopher Alex Grzankowski argues that while AI may not understand what it’s saying, it does understand language. We do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI with genuine understanding of the world.
From the best of season 1. Part 1 of my conversation with Alex Grzankowski.
It looks like ChatGPT understands what you’re asking. It looks like ChatGPT understands what it’s saying in reply.
But that’s not the case.
Alex and I discuss what understanding is, for both people and machines, and what it would take for a machine to understand what it’s saying.
One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.
With so many laws and so much case law, it’s virtually impossible for the layperson to know what’s legal and illegal. But what if AI can synthesize all that information and deliver clear legal guidance to the average person? Is such a thing possible? Is it desirable?
Greg Epstein, author of the new book “Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation,” discusses, well, what do you think? It’s right there in the title. Go have a listen.
From the best of season 1: You might think it's outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the "attention" economy is often objected to on just these grounds.
On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy?
Carissa and I discuss all this and more. I push the skeptical line, trying on the position that it doesn't really matter all that much. Carissa has powerful arguments against me.
This conversation goes way deeper than the 'privacy good/data collection bad' statements we see all the time. I hope you enjoy!
We use the wrong metaphor for thinking about AI, Shannon Vallor argues, and bad thinking leads to bad results. We need to stop thinking about AI as being an agent or having a mind, and stop thinking of the human mind/brain as a kind of software/hardware configuration. All of this is misguided. Instead, we should think of AI as a mirror, reflecting our image in ways that are sometimes helpful and sometimes distorted. Shifting to this new metaphor, she says, will lead us to better, and more ethical, AI.
Air Canada blamed its LLM chatbot for giving false information about the airline’s bereavement fare policy. It lost the lawsuit because, of course, it’s not the chatbot’s fault. But what would it take to hold chatbots responsible for what they say? That’s the topic of discussion with my guest, philosopher Emma Borg.
California’s governor just signed a bill aimed at drastically reducing deepfakes on social media. The worry, of course, is that they are already being used to unjustifiably sway voters. In this episode, one of the best from Season 1, I talk to Dean Jackson and Jon Bateman, experts on the role of deepfakes in disinformation campaigns. The bottom line? Deepfakes aren’t great, but they’re not half the problem.