
Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust, and irrational beliefs.
Marianna studies how AI-driven nudging raises the ethical stakes for autonomy and decision-making: a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with its risks, both seen and unseen. We discuss the relationship between risk and harm, and why a lack of knowledge confers rather than excuses moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature, risk-wise. Addressing the evolving responsible AI discourse, she asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded to any number of entities, from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress to date, she advocates for increased interdisciplinary effort and ethical certifications.
Marianna B. Ganapini is a Professor of Philosophy and the Founder of Logica.Now, a consultancy that educates and engages organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and a Visiting Scholar at the ND-IBM Tech Ethics Lab.
A transcript of this episode is here.
By Kimberly Nevala, Strategic Advisor - SAS