
The ethics surrounding AI are complicated yet fascinating to discuss. One issue that sits front and center is AI bias, but what is it?
AI is based on algorithms, fed by data and experiences. The problem arises when that data is incorrect, biased, or based on stereotypes. Unfortunately, this means that machines, just like humans, are guided by potentially biased information.
In other words, your daily threat from AI comes not from the machines themselves but from their bias. In this episode of Short and Sweet AI, I dig into this further and discuss a very serious problem: artificial intelligence bias.
In this episode, find out:
Important Links & Mentions:
Resources:
Episode Transcript:
Today I’m talking about a very serious problem: artificial intelligence bias.
AI Ethics
The ethics of AI are complicated. Every time I go to review this area, I’m dazed by all the issues. There are groups in the AI community who wrestle with robot ethics, the threat to human dignity, transparency ethics, self-driving car liability, AI accountability, the ethics of weaponizing AI, machine ethics, and even the existential risk from superintelligence. But of all these hidden terrors, one is front and center: artificial intelligence bias. What is it?
Machines Built with Bias
AI is based on algorithms in the form of computer software. Algorithms power computers to make decisions through something called machine learning. Machine learning algorithms are all around us: they supply the Netflix suggestions we receive, surface the posts at the top of our social media feeds, and drive the results of our Google searches. Algorithms are fed on data. If you want to teach a machine to recognize a cat, you feed the algorithm thousands of cat images until it can recognize a cat better than you can.
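The episode keeps things high level, but here is a minimal sketch of what “feeding an algorithm labeled data” looks like in code. Everything in it is a made-up illustration, not from the episode: the random arrays stand in for thousands of labeled cat photos, and logistic regression stands in for whatever model a real system would use.

```python
# Toy illustration of supervised machine learning: the model is "fed"
# labeled examples and learns a rule for classifying new ones.
# (Stand-in data; real image classifiers use deep learning pipelines.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.random((1000, 64 * 64))   # 1,000 fake "flattened images"
y_train = rng.integers(0, 2, 1000)      # fake labels: 1 = cat, 0 = not cat

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)             # the "feeding": learn from labeled data

new_image = rng.random((1, 64 * 64))    # an unseen "image"
print(model.predict(new_image))         # [1] = cat, [0] = not cat
```

The point the episode is building toward lives in `X_train` and `y_train`: the model can only learn whatever patterns, and whatever skew, those labeled examples contain. If the training data under-represents or mislabels a group, the trained model inherits that bias.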
The problem is that machine learning algorithms are used to make decisions in our daily lives that can have extreme consequences. A computer program may help police decide where to send resources, who’s approved for a mortgage, who’s accepted to a university, or who gets the job.
More and more experts in the field are sounding the alarm. Machines, just...