
This week Dan and Ray head in the opposite direction from the last two episodes. After talking about AI for Good and AI for Accessibility, they go deeper into the ways AI can be used to disadvantage people and skew decisions. Often the borderline between 'good' and 'evil' can be very fine, and the same artificial intelligence technology can be used for either, depending on the unwitting (or witting) decisions behind it!
During the chat, Ray discovers that Dan is more of a 'Dr Evil' than he'd previously thought, and together they find that people perceive 'good' and 'evil' differently when it comes to AI's use in education. This episode is much less focused on the technology, and instead spends its time on the outcomes of using it.
Ray mentions the "MIT Trolley Problem", which is actually two things! The Trolley Problem, the work of English philosopher Philippa Foot, is a thought experiment in ethics about deciding whether to divert a runaway tram. The MIT Moral Machine, which builds on this work, asks you to make moral judgements on behalf of a driverless car and weigh up the consequences. It's a great activity for colleagues and for students, because it leads to a lot of discussion.
Two other links mentioned in the podcast are the CSIRO Data61 discussion paper, part of the consultation on AI ethics in Australia (downloadable here: https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/), and the Microsoft AI Principles (available here: https://www.microsoft.com/en-us/AI/our-approach-to-ai).