Listen to resources from the AI Safety Fundamentals courses! https://aisafetyfundamentals.com/
September 29, 2025 · 15 min
AI and Leviathan: Part I
By Samuel Hammond
Source: https://www.secondbest.ca/p/ai-and-leviathan-part-i
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 19, 2025 · 44 min
d/acc: One Year Later
By Vitalik Buterin
Ethereum founder Vitalik Buterin describes how democratic, defensive and decentralised technologies could distribute AI's power across society rather than concentrating it, offering a middle path between unchecked technical acceleration and authoritarian control.
Source: https://vitalik.eth.limo/general/2025/01/05/dacc2.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 20 min
A Playbook for Securing AI Model Weights
By Sella Nevo et al.
In this report, RAND researchers identify real-world attack methods that malicious actors could use to steal AI model weights. They propose a five-level security framework that AI companies could implement to defend against different threats, from amateur hackers to nation-state operations.
Source: https://www.rand.org/pubs/research_briefs/RBA2849-1.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 10 min
AI Emergency Preparedness: Examining the Federal Government's Ability to Detect and Respond to AI-Related National Security Threats
By Akash Wasil et al.
This paper uses scenario planning to show how governments could prepare for AI emergencies. The authors examine three plausible disasters: losing control of AI, AI model theft, and bioweapon creation. They then expose gaps in current preparedness systems and propose specific government reforms, including embedding auditors inside AI companies and creating emergency response units.
Source: https://arxiv.org/pdf/2407.17347
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 14 min
Resilience and Adaptation to Advanced AI
By Jamie Bernardi
Jamie Bernardi argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government or company's control.
Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 11 min
Introduction to AI Control
By Sarah Hastings-Woodhouse
AI Control is a research agenda that aims to prevent misaligned AI systems from causing harm. It is different from AI alignment, which aims to ensure that systems act in the best interests of their users. Put simply, aligned AIs do not want to harm humans, whereas controlled AIs can't harm humans, even if they want to.
Source: https://bluedot.org/blog/ai-control
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 33 min
The Project: Situational Awareness
By Leopold Aschenbrenner
A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence due to security vulnerabilities and competitive pressures that override safety. He contends that a government-led 'AGI Project' is inevitable and necessary, both to prevent adversaries from stealing the AI systems and to avoid losing human control over the technology.
Source: https://situational-awareness.ai/the-project/?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 22 min
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
By Yoshua Bengio et al.
This paper argues that building generalist AI agents poses catastrophic risks, from misuse by bad actors to a potential loss of human control. As an alternative, the authors propose “Scientist AI,” a non-agentic system designed to explain the world through theory generation and question-answering rather than acting in it. They suggest this path could accelerate scientific progress, including in AI safety, while avoiding the dangers of agency-driven AI.
Source: https://arxiv.org/pdf/2502.15657
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 18, 2025 · 2 h 20 min
The Intelligence Curse
By Luke Drago and Rudolf Laine
This piece explores how the arrival of AGI could trigger an “intelligence curse,” where automation of all work removes incentives for states and companies to care about ordinary people. It frames the trillion-dollar race toward AGI as not just an economic shift, but a transformation in power dynamics and human relevance.
Source: https://intelligence-curse.ai/?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
September 12, 2025 · 9 min
AI Is Reviving Fears Around Bioterrorism. What’s the Real Risk?
By Kyle Hiebert
The global spread of large language models is heightening concerns that extremists could leverage AI to develop or deploy biological weapons. While some studies suggest chatbots only marginally improve bioterror capabilities compared to internet searches, other assessments show rapid year-on-year gains in AI systems’ ability to advise on acquiring and formulating deadly agents. Policymakers now face an urgent question: how real and imminent is the threat of AI-enabled bioterrorism?
Source: https://www.cigionline.org/articles/ai-is-reviving-fears-around-bioterrorism-whats-the-real-risk/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
FAQs about AI Safety Fundamentals:
How many episodes does AI Safety Fundamentals have? The podcast currently has 173 episodes available.