When the world thinks about ethical AI, thoughts wander to movie portrayals of intelligent machines used for nefarious reasons. In the real world, that’s still a possibility, but many point to AI as the future of understanding data. Sharing his thoughts on the subject, Peter Judge, Head Editor of Datacenter Dynamics, joined host Raymond Hawkins on Not Your Father’s Data Center.
Judge recently authored a feature story in the publication about AI and its problems.
Judge explained the concept of ethical AI: “When you have a system that can calculate and think, you have to understand the algorithms behind it and why it’s coming to such conclusions. It’s as much about the people using it as the AI itself.”
AI isn’t going to produce “wrong” conclusions, but it may not answer the questions intended. “The ethical part is about how it’s being used, not the technology,” Judge said. He pointed to the example of using lung X-rays to determine which patients had COVID. “A professor looked into this, and the AI couldn’t diagnose because the training data wasn’t accurate and there were other issues.”
The promise of AI was to churn through mountains of data and drive answers, but AI does what it’s programmed to do. It’s not sentient, so the potential for human error is abundant. “Increasing the size of the haystack doesn’t mean you’ll find the needle,” Judge commented.
The other area of ethics is deployment for good or for ill. Judge cited the example of AI analyzing online interactions and activities, then using what it learned to target advertising in ways that veered toward the unethical.
AI is important to the future, but the humans programming and using it are still on ethically shaky ground.