

This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades).
Over a year ago when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and governance?" he mentioned Yi immediately. Jaan was right.
See the full article from this episode: https://danfaggella.com/zeng1
Watch the full episode on YouTube: https://youtu.be/jNfnYUcBlmM
This episode referred to the following other essays and resources:
-- AI Safety Connect - https://aisafetyconnect.com
-- Yi's profile on the Chinese Academy of Sciences - https://braincog.ai/~yizeng/
...
There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
By Daniel Faggella
