As AI continues to make its way into more aspects of life, some interesting trends have emerged in how the public feels about these new, increasingly pervasive services. The developers of AI promise that their systems will produce reliable, comprehensive, and bias-free results. But national surveys consistently show that the public is sceptical of AI. And yet experimental studies show that in practice, people trust AI more than one might suspect.
Can increasing AI literacy help overcome this deficit, teaching us what to trust when it comes to AI and where we're right to be cautious? And if so, how should literacy initiatives balance teaching how AI works in practice with imagining how it could or should work in the future?
Today’s guest, Dr Heather Ford, has been thinking extensively about these issues. She’s an ARC Future Fellow and Professor in the School of Communications at UTS. She is the Coordinator of the UTS Data and AI Ethics Cluster, Affiliate of the UTS Data Science Institute, and Associate of the UTS Centre for Media Transition. She was appointed to the International Panel on the Information Environment (IPIE) in 2023.
Heather Ford is currently conducting research, funded by the Australian Research Council and the Wikimedia Foundation, on Wikipedia bias, question-answering technologies, digital literacy, and the impact of generative AI on our information environment. She has previously worked for global technology corporations and non-profits in the US, UK, South Africa, and Kenya. Her research focuses on the social implications of media technologies and the ways in which they might be better designed to prevent misinformation, social exclusion, and harms resulting from algorithmic bias.
Hosted on Acast. See acast.com/privacy for more information.