


Read the full transcript here.
Should we pause AI development? What might it mean for an AI system to be "provably" safe? Are our current AI systems provably unsafe? What makes AI especially dangerous relative to other modern technologies? Or are the risks from AI overblown? What are the arguments in favor of not pausing — or perhaps even accelerating — AI progress? What is the public perception of AI risks? What steps have governments taken to mitigate AI risks? If thoughtful, prudent, cautious actors pause their AI development, won't bad actors still keep going? To what extent are people emotionally invested in this topic? What should we think of AI researchers who agree that AI poses very great risks and yet continue to work on building and improving AI technologies? Should we attempt to centralize AI development?
Joep Meindertsma is a database engineer and tech entrepreneur from the Netherlands. He co-founded the open source e-democracy platform Argu, which aimed to get people involved in decision-making. Currently, he is the CEO of Ontola.io, a software development firm from the Netherlands that aims to give people more control over their data; and he is also working on a specification and implementation for modeling and exchanging data called Atomic Data. In 2023, after spending several years reading about AI safety and deciding to dedicate most of his time to preventing AI catastrophe, he founded PauseAI and began actively lobbying for slowing down AI development. He's now trying to grow PauseAI and get more people into action. Learn more about him on his GitHub page.
By Spencer Greenberg · 4.8 (132 ratings)
