
We hear a lot about harm from AI and how the big platforms are focused on using AI and user data to enhance their profits. What about developing AI for good for the rest of us? What would it take to design AI systems that are beneficial to humans?
In this episode, we talk with Mark Nitzberg, Executive Director of CHAI, the UC Berkeley Center for Human-Compatible AI, and head of strategic outreach for Berkeley AI Research. Mark began studying AI in the early 1980s and completed his PhD in computer vision and human perception under David Mumford at Harvard. He has built companies and products in various AI fields, including The Blindsight Corporation, a maker of assistive technologies for low vision and active aging, which was acquired by Amazon. Mark is also co-author of The AI Generation, which examines how AI reshapes human values, trust, and power around the world.
We talk with Mark about CHAI’s goal of reorienting AI research toward provably beneficial systems, why beneficial AI is hard to develop, variability in human thinking and preferences, the parallels between management OKRs and AI objectives, human-centered AI design, and how AI might help humans realize the future we prefer.
Links:
Learn more about UC Berkeley CHAI
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
P.S. Thanks to Jonathan Coulton for our music
About Artificiality from Helen & Dave Edwards:
Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world.
We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives.
Subscribe for free at https://www.artificiality.world.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Learn about our book Make Better Decisions and buy it on Amazon
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world.
#ai #artificialintelligence #generativeai #airesearch #complexity #futureofai