Ethical Machines
By Reid Blackman
The podcast currently has 39 episodes available.
Visit ethicalmachinespodcast.com to subscribe to the new podcast feed and listen to the latest episodes.
Ethical Machines is on hiatus until 20 June 2024.
There’s good reason to think AI doesn’t understand anything: it’s just moving words around according to mathematical rules, predicting which words come next. But in this episode, philosopher Alex Grzankowski argues that while AI may not understand what it’s saying, it does understand language. We do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI that has genuine understanding of the world.
Imagine we’re awash in high-quality AI-generated creative content: books, poems, podcasts, images, TV, and film. And imagine it’s every bit as moving as human-made art. We cry, we laugh, we’re inspired. Does it matter that it was generated by an AI? Does it undermine the experience? I think it does, and I’ll try to convince you of just that point.
We’re told that algorithms on social media are manipulating us. But is that true? What is manipulation? Can an AI really do it? And is it necessarily a bad thing? These questions and more with philosopher Michael Klenk.
Unless you don't mind decreased autonomy and increased narcissism
How bad is it and what could possibly fix it?
Jon Bateman is a senior fellow at the Carnegie Endowment for International Peace, where he focuses on global technology challenges at the intersection of national security, economics, politics, and society. His research areas include techno-nationalism, cyber operations, disinformation, and AI.
Bateman is the author of U.S.-China Technological “Decoupling”: A Strategy and Policy Framework (2022). Former Google CEO Eric Schmidt, in his foreword, called it “a major achievement” that “stands out for its ambition, clarity, and rigor” and “will remain a touchstone for years to come.” Bateman is also the co-author of Countering Disinformation Effectively: An Evidence-Based Policy Guide (2024). His other major works include a military assessment of Russia’s cyber operations in Ukraine and a proposal to reform cyber insurance for catastrophic and state-sponsored events.
Before joining Carnegie, Bateman was a special assistant to Chairman of the Joint Chiefs of Staff General Joseph F. Dunford, Jr., serving as the chairman’s first civilian speechwriter and the lead analyst in the chairman’s internal think tank. Bateman previously worked in the Office of the Secretary of Defense, developing several key policies and organizations for military cyber operations, and at the Defense Intelligence Agency, leading teams responsible for assessing Iran’s senior leadership, decisionmaking, internal stability, and cyber activities.
Bateman’s writings have appeared in the Wall Street Journal, MSNBC, Politico, Slate, Harvard Business Review, Foreign Policy, and elsewhere. His TV and radio appearances include BBC News, NPR Morning Edition, and C-SPAN After Words. Bateman is a graduate of Harvard Law School and Johns Hopkins University.
Or should we value human deliberation even when the results are worse?
Can we train AI to be ethical the same way we teach children?
Cameron Buckner’s research primarily concerns philosophical issues that arise in the study of non-human minds, especially animal cognition and artificial intelligence. He began his academic career in logic-based artificial intelligence. This research inspired an interest in the relationship between classical models of reasoning and the (usually very different) ways that humans and animals actually solve problems, which led him to the discipline of philosophy. He received a PhD in Philosophy from Indiana University in 2011 and held an Alexander von Humboldt Postdoctoral Fellowship at Ruhr-University Bochum from 2011 to 2013. He recently published a book with Oxford University Press that uses empiricist philosophy of mind (drawing on figures such as Aristotle, Ibn Sina, John Locke, David Hume, William James, and Sophie de Grouchy) to understand recent advances in deep-neural-network-based artificial intelligence.
Can regulations curb the ethically disastrous tendencies of AI?
He previously worked as a Research Manager at Meta (formerly Facebook) on the Responsible AI, Civic Integrity and Social Impact teams. Before that, he worked as a Research Director at the Institute for the Future.
He was named to Business Insider’s AI 100 list for his work on AI governance, fairness, and misinformation. He has published a book and numerous articles in outlets including The Guardian, BBC, Tech Policy Press, and Adbusters. He has been interviewed and quoted by CNN, BBC, AP, Bloomberg, and The Atlantic, and has given dozens of talks around the world.