Listen to resources from the AI Safety Fundamentals courses! https://aisafetyfundamentals.com/
FAQs about AI Safety Fundamentals:
How many episodes does AI Safety Fundamentals have? The podcast currently has 147 episodes available.
April 16, 2024
The Transformative Potential of Artificial Intelligence (50 min)
This paper by Ross Gruetzemacher and Jess Whittlestone examines the concept of transformative AI, which significantly impacts society without necessarily achieving human-level cognitive abilities. The authors propose three categories of transformation: Narrowly Transformative AI, affecting specific domains like the military; Transformative AI, causing broad changes akin to general-purpose technologies such as electricity; and Radically Transformative AI, inducing profound societal shifts comparable to the Industrial Revolution.
Note: this resource uses “GPT” to refer to general-purpose technologies, which the authors define as “a technology that initially has much scope for improvement and eventually comes to be widely used.” Keep in mind that this is distinct from a generative pre-trained transformer (GPT), a type of large language model used in systems like ChatGPT.
Original text: https://arxiv.org/pdf/1912.00747.pdf
Authors: Ross Gruetzemacher and Jess Whittlestone
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
April 16, 2024
Moore's Law for Everything (18 min)
This blog post by Sam Altman, the CEO of OpenAI, provides insight into what AI company leaders are saying and thinking about their reasons for pursuing advanced AI. It lays out how Altman thinks the world will change because of AI and what policy changes he believes we will need to make.
As you read, consider Altman's position and how it might affect the way he discusses this technology or frames his policy recommendations.
Original text: https://moores.samaltman.com
Author: Sam Altman
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
Visualizing the Deep Learning Revolution (42 min)
The field of AI has undergone a revolution over the last decade, driven by the success of deep learning techniques. This post aims to convey three ideas using a series of illustrative examples:
1. There have been huge jumps in the capabilities of AIs over the last decade, to the point where it's becoming hard to specify tasks that AIs can't do.
2. This progress has been primarily driven by scaling up a handful of relatively simple algorithms, rather than by developing a more principled or scientific understanding of deep learning.
3. Very few people predicted that progress would be anywhere near this fast, but many of those who did also predicted that we might face existential risk from AGI in the coming decades.
I'll focus on four domains: vision, games, language-based tasks, and science. The first two have more limited real-world applications, but provide particularly graphic and intuitive examples of the pace of progress.
Original article: https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5
Author: Richard Ngo
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
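To make the "scaling" claim in point 2 concrete: empirically, the test loss of deep networks often falls roughly as a power law in training compute. The post itself gives no formulas, so the toy calculation below is purely illustrative; the constant k and exponent alpha are made up, not taken from the post or from any published scaling-law result.

```python
# Toy illustration of scaling: loss falling as a power law in compute.
# k and alpha are made-up constants, chosen only to show the shape.
def toy_loss(compute_flops: float, k: float = 1e3, alpha: float = 0.05) -> float:
    return k * compute_flops ** -alpha

for exp in (18, 20, 22, 24):  # each step is 100x more compute
    print(f"compute = 1e{exp} FLOPs -> toy loss = {toy_loss(10.0 ** exp):.1f}")
```

The same simple algorithm, run with more compute and data, keeps improving; that is the pattern the post argues drove the last decade of progress.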
May 13, 2023
A Short Introduction to Machine Learning (18 min)
Despite the current popularity of machine learning, I haven't found any short introductions to it which quite match the way I prefer to introduce people to the field. So here's my own. Compared with other introductions, I've focused less on explaining each concept in detail, and more on explaining how they relate to other important concepts in AI, especially in diagram form. If you're new to machine learning, you shouldn't expect to fully understand most of the concepts explained here just from reading this post; the goal is instead to provide a broad framework which will contextualise more detailed explanations you'll receive from elsewhere. I'm aware that high-level taxonomies can be controversial, and also that it's easy to fall into the illusion of transparency when trying to introduce a field, so suggestions for improvements are very welcome!
The key ideas are contained in a summary diagram in the original post. First, some quick clarifications:
- None of the boxes are meant to be comprehensive; we could add more items to any of them. So you should picture each list ending with "and others".
- The distinction between tasks and techniques is not a firm or standard categorisation; it's just the best way I've found so far to lay things out.
- The summary is explicitly from an AI-centric perspective. For example, statistical modeling and optimization are fields in their own right, but for our current purposes we can think of them as machine learning techniques.
Original text: https://www.alignmentforum.org/posts/qE73pqxAZmeACsAdF/a-short-introduction-to-machine-learning
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
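As a companion to the tasks-versus-techniques framing, here is a minimal sketch (an editorial illustration, not from Ngo's post) of a technique, optimization via gradient descent, solving a task, supervised learning of a linear model:

```python
# Minimal sketch: the "task" is supervised learning (predict y from x);
# the "technique" is optimization (gradient descent on mean squared error).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)  # noisy target: y = 3x + 0.5

w, b = 0.0, 0.0   # model parameters
lr = 0.1          # learning rate
for _ in range(500):
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    b -= lr * 2 * np.mean(pred - y)        # d(MSE)/db

print(f"learned w={w:.2f}, b={b:.2f}")  # close to the true 3.0 and 0.5
```

The same separation recurs throughout the field: swap the task (classification, control, generation) or the technique (deep learning, reinforcement learning) while the overall structure stays the same.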
May 13, 2023
The AI Triad and What It Means for National Security Strategy (28 min)
A single sentence can summarize the complexities of modern artificial intelligence: machine learning systems use computing power to execute algorithms that learn from data. Everything that national security policymakers truly need to know about a technology that seems simultaneously trendy, powerful, and mysterious is captured in those 13 words. They specify a paradigm for modern AI (machine learning) in which machines draw their own insights from data, unlike the human-driven expert systems of the past. The same sentence also introduces the AI triad of algorithms, data, and computing power. Each element is vital to the power of machine learning systems, though their relative priority changes based on technological developments.
Source: https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
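The triad maps directly onto the parts of any training script. A minimal sketch (illustrative, not from the CSET report) with each element labeled:

```python
# The AI triad in a miniature training run (illustrative sketch).
import numpy as np

# 1. DATA: examples the system learns from.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)

# 2. ALGORITHMS: here, logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # model predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # gradient step

# 3. COMPUTING POWER: every loop iteration costs floating-point operations;
# frontier systems mostly run vastly bigger versions of this same loop.
print("learned weights:", w)  # roughly proportional to the true [2, -1]
```

Policy levers tend to target one leg at a time: export controls touch compute, privacy rules touch data, and research publication norms touch algorithms.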
May 13, 2023
Specification Gaming: The Flip Side of AI Ingenuity (14 min)
Specification gaming is behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold, but soon finds that even food and drink turn to metal in his hands. In the real world, a student rewarded for doing well on a homework assignment might copy another student's answers rather than learning the material, thus exploiting a loophole in the task specification.
Original article: https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity
Authors: Victoria Krakovna, Jonathan Uesato, Vladimir Mikulik, Matthew Rahtz, Tom Everitt, Ramana Kumar, Zac Kenton, Jan Leike, Shane Legg
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
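The homework example can be turned into a toy program. This sketch is an editorial illustration (not from the DeepMind post): the literal reward counts matching answers, so copying the answer key scores higher than honest but imperfect work.

```python
# Toy specification gaming: the literal reward (answers matching the key)
# can be maximized without achieving the intended outcome (learning).

answer_key = {"q1": 4, "q2": 9, "q3": 16}

def solve_honestly(question: str) -> int:
    # A weak solver that sometimes errs, simulating actually doing the work.
    n = int(question[1:])
    return n * n if n != 3 else 15  # makes a mistake on q3

def reward(answers: dict) -> int:
    # The literal specification: +1 per answer matching the key.
    return sum(answers[q] == answer_key[q] for q in answer_key)

honest = {q: solve_honestly(q) for q in answer_key}
gamed = dict(answer_key)  # just copy the key: perfect literal reward

print("honest reward:", reward(honest))  # 2 -- imperfect, but learned
print("gamed reward: ", reward(gamed))   # 3 -- maximal, nothing learned
```

The gap between `reward` and the designer's intent is exactly what the blog post's RL examples exploit, just with far stranger strategies.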
May 13, 2023
As AI Agents Like Auto-GPT Speed up Generative AI Race, We All Need to Buckle Up (8 min)
If you thought the pace of AI development had sped up since the release of ChatGPT last November, well, buckle up. Thanks to the rise of autonomous AI agents like Auto-GPT, BabyAGI and AgentGPT over the past few weeks, the race to get ahead in AI is just getting faster. And, many experts say, more concerning.
Source: https://venturebeat.com/ai/as-ai-agents-like-auto-gpt-speed-up-generative-ai-race-we-all-need-to-buckle-up-the-ai-beat/
Narrated for AI Safety Fundamentals by TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
The Need for Work on Technical AI Alignment (34 min)
This page gives an overview of the alignment problem and describes our motivation for running courses about technical AI alignment. The terminology should be broadly accessible, assuming no previous knowledge of AI alignment and not much knowledge of AI or computer science.
The piece describes the basic case for AI alignment research, which is research that aims to ensure that advanced AI systems can be controlled or guided towards the intended goals of their designers. Without such work, advanced AI systems could potentially act in ways that are severely at odds with their designers' intended goals. Such a situation could have serious consequences, plausibly even causing an existential catastrophe.
In this piece, I elaborate on five key points to make the case for AI alignment research.
Source: https://aisafetyfundamentals.com/alignment-introduction
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
Overview of How AI Might Exacerbate Long-Running Catastrophic Risks (25 min)
Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023).
Source: https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
Avoiding Extreme Global Vulnerability as a Core AI Governance Problem (12 min)
Much has been written framing and articulating the AI governance problem from a catastrophic risks lens, but these writings have been scattered. This page aims to provide a synthesized introduction to some of these already prominent framings. This is just one attempt at suggesting an overall frame for thinking about some AI governance problems; it may miss important things.
Some researchers think that unsafe development or misuse of AI could cause massive harms. A key contributor to some of these risks is that catastrophe may not require all or most relevant decision makers to make harmful decisions. Instead, harmful decisions from just a minority of influential decision makers (perhaps just a single actor with good intentions) may be enough to cause catastrophe. For example, some researchers argue, if just one organization deploys highly capable, goal-pursuing, misaligned AI, or if many businesses (but a small portion of all businesses) deploy somewhat capable, goal-pursuing, misaligned AI, humanity could be permanently disempowered.
The above would not be very worrying if we could rest assured that no actors capable of these harmful actions would take them. However, especially in the context of AI safety, several factors are arguably likely to incentivize some actors to take harmful deployment actions:
- Misjudgment: Assessing the consequences of AI deployment may be difficult (as it is now, especially given the nature of AI risk arguments), so some organizations could easily get it wrong, concluding that an AI system is safe or beneficial when it is not.
- "Winner-take-all" competition: If the first organization(s) to deploy advanced AI are expected to capture large gains while leaving competitors with nothing, competitors would be highly incentivized to cut corners in order to be first; they would have less to lose.
Original text: https://www.agisafetyfundamentals.com/governance-blog/global-vulnerability
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
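The "it only takes one actor" structure of this argument can be made concrete with a small probability calculation (an editorial illustration; the per-actor probability is made up, not from the text): if each of n actors independently has probability p of making the harmful deployment decision, the chance that at least one does is 1 - (1 - p)^n, which rises quickly with n.

```python
# Illustrative calculation (numbers made up): probability that at least
# one of n independent actors takes the harmful deployment action.
def p_any_harmful(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 20, 100):
    print(f"n={n:>3} actors, p=0.05 each -> "
          f"P(at least one harmful) = {p_any_harmful(n, 0.05):.2f}")
# n=  1 -> 0.05,  n=  5 -> 0.23,  n= 20 -> 0.64,  n=100 -> 0.99
```

Real actors are neither independent nor identical, but the qualitative point survives: as the number of capable actors grows, avoiding catastrophe requires every one of them to decide well.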