
Today’s guest is Virginia Dignum, Director of the AI Policy Lab, Professor of Computer Science at Umeå University, and an Associate Professor at Delft University of Technology. In this special episode of the “AI Futures” series on the AI in Business podcast, we offer an exclusive sample from the Trajectory podcast, hosted by Emerj CEO and Head of Research Daniel Faggella.

Together, Virginia and Daniel hypothesize what human governance over artificial general intelligence (AGI) might look like, and the many challenges NGOs and international bodies face today in creating the foundations for these governing structures. Virginia challenges the prevalent AGI narrative, advocating instead for a focus on what she terms “Human General Intelligence” (HGI), emphasizing collaboration and human-machine augmentation over existential AI risks. The discussion explores why governing AI requires a socio-technical approach, ensuring accountability rests with humans rather than machines. Virginia also highlights the importance of global governance in addressing both the risks and the equitable benefits of AI, touching on her hands-on experience with international policy groups.

If you’re interested in getting more perspectives on AI’s longer-term impact on business and society, be sure to tune into the Trajectory podcast. You can find the YouTube and podcast links here: emerj.com/tj2