Listen to resources from the AI Safety Fundamentals courses! https://aisafetyfundamentals.com/
FAQs about AI Safety Fundamentals:
How many episodes does AI Safety Fundamentals have?
The podcast currently has 147 episodes available.
May 04, 2024
Strengthening Resilience to AI Risk: A Guide for UK Policymakers
This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies different policy levers as they apply to different stages of the AI lifecycle. It splits the lifecycle into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy that ranks approaches in decreasing order of preference, arguing that "policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises." While this document is designed for UK policymakers, most of its findings are broadly applicable.
Original text: https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf
Authors: Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(25 min)
May 03, 2024
The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe
This report by the Nuclear Threat Initiative primarily focuses on how AI's integration into the biosciences could advance biotechnology but also pose potentially catastrophic biosecurity risks. It's included as a core resource this week because the assigned pages offer a valuable case study of an under-discussed lever for AI risk mitigation: building resilience. Resilience in a risk-reduction context is defined by the UN as "the ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management." As you're reading, consider other areas where policymakers might be able to build a more resilient society to mitigate AI risk.
Original text: https://www.nti.org/wp-content/uploads/2023/10/NTIBIO_AI_FINAL.pdf
Authors: Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(9 min)
May 01, 2024
What is AI Alignment?
To address the problem of rogue AIs, we'll have to align them. In this article, Adam Jones of BlueDot Impact introduces the concept of aligning AIs, defining alignment as "making AI systems try to do what their creators intend them to do."
Original text: https://aisafetyfundamentals.com/blog/what-is-ai-alignment/
Author: Adam Jones
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(12 min)
May 01, 2024
Rogue AIs
This excerpt from CAIS's AI Safety, Ethics, and Society textbook provides a deep dive into the CAIS resource from session three, focusing specifically on the challenges of controlling advanced AI systems.
Original text: https://www.aisafetybook.com/textbook/1-5
Author: The Center for AI Safety
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(35 min)
April 29, 2024
An Overview of Catastrophic AI Risks
This article from the Center for AI Safety provides an overview of ways that advanced AI could cause catastrophe. It groups catastrophic risks into four categories: malicious use, AI race, organizational risk, and rogue AIs. The article is a summary of a longer paper of the same name.
Original text: https://www.safe.ai/ai-risk
Authors: Dan Hendrycks, Thomas Woodside, Mantas Mazeika
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(46 min)
April 23, 2024
Future Risks of Frontier AI
This report from the UK's Government Office for Science envisions five potential risk scenarios from frontier AI. Each scenario includes information on the AI system's capabilities, ownership and access, safety, level and distribution of use, and geopolitical context. It provides key policy issues for each scenario and concludes with an overview of existential risk. If you have extra time, we'd recommend you read the entire document.
Original text: https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf
Author: The UK Government Office for Science
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(41 min)
April 23, 2024
What risks does AI pose?
This resource, written by Adam Jones at BlueDot Impact, provides a comprehensive overview of the existing and anticipated risks of AI. As you go through the reading, consider what different futures might look like should different combinations of risks materialize.
Original text: https://aisafetyfundamentals.com/blog/ai-risks/
Author: Adam Jones
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(25 min)
April 22, 2024
AI Could Defeat All Of Us Combined
This blog post from Holden Karnofsky, Open Philanthropy's Director of AI Strategy, explains how advanced AI might overpower humanity. It summarizes arguments for superintelligent takeover and presents a scenario in which human-level AI disempowers humans without achieving superintelligence. As Holden summarizes: "if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem."
Original text: https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#the-standard-argument-superintelligence-and-advanced-technology
Author: Holden Karnofsky
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(24 min)
April 16, 2024
The Economic Potential of Generative AI: The Next Productivity Frontier
This report from McKinsey discusses the huge potential for economic growth that generative AI could bring, examining key drivers and exploring potential productivity boosts in different business functions. While reading, evaluate how realistic its claims are, and how this might affect the organization you work at (or organizations you might work at in the future).
Original text: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
Authors: Michael Chui et al.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(43 min)
April 16, 2024
Positive AI Economic Futures
This insight report from the World Economic Forum summarizes some positive AI outcomes. Proposed futures include AI enabling shared economic benefit, creating more fulfilling jobs, or allowing humans to work less – giving them time to pursue more satisfying activities like volunteering, exploration, or self-improvement. It also discusses common problems that prevent people from making good predictions about the future.
Note: this report was released before ChatGPT, which seems to have shifted expert predictions about when AI systems might become broadly capable of completing most cognitive labor (see Section 3, Exhibit 6 of the McKinsey resource above). Keep this in mind when reviewing Section 1.1.
Original text: https://www3.weforum.org/docs/WEF_Positive_AI_Economic_Futures_2021.pdf
Authors: Stuart Russell, Daniel Susskind
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
(22 min)