AI Safety Fundamentals

The Project: Situational Awareness


By Leopold Aschenbrenner

A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence, because security vulnerabilities and competitive pressures override safety. He contends that a government-led 'AGI Project' is both inevitable and necessary: to prevent adversaries from stealing the AI systems, and to avoid losing human control over the technology.

Source:

https://situational-awareness.ai/the-project/?utm_source=bluedot-impact

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.
