Today's guest is Virginia Dignum, Director of the AI Policy Lab, Professor of Computer Science at Umeå University, and an Associate Professor at Delft University of Technology. In this special episode of the "AI Futures" series on the AI in Business podcast, we offer an exclusive sample from the Trajectory podcast, hosted by Emerj CEO and Head of Research Daniel Faggella. Together, Virginia and Daniel hypothesize what human governance over artificial general intelligence (AGI) might look like, and the many challenges NGOs and international bodies face today in laying the foundations for these governing structures. Virginia challenges the prevalent AGI narrative, advocating instead for a focus on what she terms "Human General Intelligence" (HGI), emphasizing collaboration and human-machine augmentation over existential AI risks. The discussion explores why governing AI requires a socio-technical approach, ensuring accountability rests with humans rather than machines. Virginia also highlights the importance of global governance in addressing both the risks and the equitable distribution of AI's benefits, drawing on her hands-on experience with international policy groups.

If you're interested in more perspectives on AI's longer-term impact on business and society, be sure to tune into the Trajectory podcast. You can find the YouTube and podcast links here: emerj.com/tj2