
As AI automates a growing share of the activities of companies and organisations, there's a greater need to think carefully about questions of privacy, bias, transparency, and explainability. Because of scale effects, mistakes made by AI and automated data analysis can have wide-reaching impacts. On the other hand, evidence of effective governance of AI development can deepen trust and accelerate the adoption of significant innovations.
One person who has thought a great deal about these issues is Ray Eitel-Porter, Global Lead for Responsible AI at Accenture. In this episode of the London Futurist Podcast, he explains what conclusions he has reached.
Topics discussed include:
*) The meaning and importance of "Responsible AI"
*) Connections and contrasts with "AI ethics" and "AI safety"
*) The advantages of formal AI governance processes
*) Recommendations for the operation of an AI ethics board
*) Anticipating the operation of the EU's AI Act
*) How different intuitions of fairness can produce divergent results
*) Examples where transparency has been limited
*) The potential future evolution of the discipline of Responsible AI.
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Universal Public Domain Dedication
Some follow-up reading:
https://www.accenture.com/gb-en/services/applied-intelligence/ai-ethics-governance