
Today’s guest is Virginia Dignum, Director of the AI Policy Lab, Professor of Computer Science at Umeå University, and Associate Professor at Delft University of Technology. In this special episode of the “AI Futures” series on the AI in Business podcast, we offer an exclusive sample from the Trajectory podcast, hosted by Emerj CEO and Head of Research Daniel Faggella.

Together, Virginia and Daniel explore what human governance over artificial general intelligence (AGI) might look like, and the many challenges NGOs and international bodies face today in laying the foundations for these governing structures. Virginia challenges the prevalent AGI narrative, advocating instead for a focus on what she terms “Human General Intelligence” (HGI), emphasizing collaboration and human-machine augmentation over existential AI risks. The discussion explores why governing AI requires a socio-technical approach that keeps accountability with humans rather than machines. Virginia also highlights the importance of global governance in addressing both the risks and the equitable distribution of AI’s benefits, drawing on her hands-on experience with international policy groups.

If you’re interested in more perspectives on AI’s longer-term impact on business and society, be sure to tune into the Trajectory podcast. You can find the YouTube and podcast links here: emerj.com/tj2