FAQs about BlueDot Narrated

How many episodes does BlueDot Narrated have?
The podcast currently has 220 episodes available.
May 13, 2024 · Recent U.S. Efforts on AI Policy (6 min)
Audio versions of blogs and papers from BlueDot courses. This high-level overview by CISA summarizes major US policies on AI at the federal level. Important items worth further investigation include Executive Order 14110, the voluntary commitments, the AI Bill of Rights, and Executive Order 13859.
Original text: https://www.cisa.gov/ai/recent-efforts
Author(s): The US Cybersecurity and Infrastructure Security Agency
A podcast by BlueDot Impact.
May 13, 2024 · AI Index Report 2024, Chapter 7: Policy and Governance (21 min)
This yearly report from Stanford's Institute for Human-Centered AI (HAI) tracks AI governance actions and broader trends in policies and legislation by governments around the world in 2023. It includes a summary of major policy actions taken by different governments, as well as analyses of regulatory trends, the volume of AI legislation, and the focus areas governments are prioritizing in their interventions.
Original text: https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024_Chapter_7.pdf
Authors: Nestor Maslej et al.
May 05, 2024 · The Policy Playbook: Building a Systems-Oriented Approach to Technology and National Security Policy (57 min)
This report by the Center for Security and Emerging Technology first analyzes the tensions and tradeoffs between three strategic technology and national security goals: driving technological innovation, impeding adversaries' progress, and promoting safe deployment. It then identifies direct and enabling policy levers, assessing each based on the tradeoffs it makes. While this document is designed for US policymakers, most of its findings are broadly applicable.
Original text: https://cset.georgetown.edu/wp-content/uploads/The-Policy-Playbook.pdf
Authors: Jack Corrigan, Melissa Flagg, and Dewey Murdick
May 04, 2024 · Strengthening Resilience to AI Risk: A Guide for UK Policymakers (25 min)
This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies policy levers as they apply to different stages of the AI lifecycle, which it splits into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy that ranks approaches in decreasing order of preference, arguing that "policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises." While this document is designed for UK policymakers, most of its findings are broadly applicable.
Original text: https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf
Authors: Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin
May 03, 2024 · The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe (9 min)
This report by the Nuclear Threat Initiative focuses on how AI's integration into the biosciences could advance biotechnology but also pose potentially catastrophic biosecurity risks. It's included as a core resource this week because the assigned pages offer a valuable case study of an under-discussed lever for AI risk mitigation: building resilience. In a risk reduction context, the UN defines resilience as "the ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management." As you read, consider other areas where policymakers might build a more resilient society to mitigate AI risk.
Original text: https://www.nti.org/wp-content/uploads/2023/10/NTIBIO_AI_FINAL.pdf
Authors: Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif
May 01, 2024 · What is AI Alignment? (12 min)
To prevent rogue AIs, we'll have to align them. In this article, Adam Jones of BlueDot Impact introduces the concept of aligning AIs, defining alignment as "making AI systems try to do what their creators intend them to do."
Original text: https://aisafetyfundamentals.com/blog/what-is-ai-alignment/
Author: Adam Jones
May 01, 2024 · Rogue AIs (35 min)
This excerpt from CAIS's AI Safety, Ethics, and Society textbook provides a deep dive into the CAIS resource from session three, focusing specifically on the challenges of controlling advanced AI systems.
Original text: https://www.aisafetybook.com/textbook/1-5
Author: The Center for AI Safety
April 29, 2024 · An Overview of Catastrophic AI Risks (46 min)
This article from the Center for AI Safety provides an overview of ways that advanced AI could cause catastrophe, grouping catastrophic risks into four categories: malicious use, AI race, organizational risk, and rogue AIs. The article summarizes a longer paper of the same name.
Original text: https://www.safe.ai/ai-risk
Authors: Dan Hendrycks, Thomas Woodside, and Mantas Mazeika
April 23, 2024 · Future Risks of Frontier AI (41 min)
This report from the UK's Government Office for Science envisions five potential risk scenarios from frontier AI. Each scenario includes information on the AI system's capabilities, ownership and access, safety, level and distribution of use, and geopolitical context. It provides key policy issues for each scenario and concludes with an overview of existential risk. If you have extra time, we'd recommend reading the entire document.
Original text: https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf
Author: The UK Government Office for Science
April 23, 2024 · What risks does AI pose? (25 min)
This resource, written by Adam Jones at BlueDot Impact, provides a comprehensive overview of the existing and anticipated risks of AI. As you go through the reading, consider what different futures might look like should different combinations of risks materialize.
Original text: https://aisafetyfundamentals.com/blog/ai-risks/
Author: Adam Jones