Whose responsibility is it if an AI goes haywire and harms a human? And how should we think about AIs' rights, responsibilities, and morality? These are serious questions we're going to have to grapple with far more often in the future. Prof. David Gunkel, this week's guest, argues we need to be thinking about them now. In this conversation, we cover:
00:00 - Intro
01:45 - Defining a person and defining a thing
02:45 - Blurring the lines with robots
05:15 - Anthropomorphisation
07:00 - AI friends and conscious minds
09:30 - Talking to machines and AI girlfriends
12:00 - Marc Andreessen, technological things, and techno-optimism
15:30 - Human treatment of animals
18:00 - AIs making decisions in the real world
19:30 - Talking about robots rather than AIs
21:30 - Cloud-based AI, LLMs, and AIs falling in love with each other
25:30 - Chinese, European, and American approaches to regulating AI
27:30 - AI risk and doomerism
29:30 - Open source AI, regulatory capture, and ethics-washing
32:30 - Artificial Superintelligence
34:30 - Doomers creating an emergency
35:30 - How to legislate robots and AI
41:30 - The future and the responsibilities of AIs
44:30 - Instrumental goals, recommendation algorithms, and ultimate responsibility
50:30 - Unwittingly heralding the downfall of civilization
52:30 - Sam Altman and the smooth exponential of technology
If you're interested in reading more, the book we discuss can be downloaded here: https://direct.mit.edu/books/oa-monograph/5641/Person-Thing-RobotA-Moral-and-Legal-Ontology-for
Hosted on Acast. See acast.com/privacy for more information.