Listen to resources from the AI Safety Fundamentals courses! https://aisafetyfundamentals.com/
FAQs about AI Safety Fundamentals:

How many episodes does AI Safety Fundamentals have? The podcast currently has 147 episodes available.
May 13, 2023
The State of AI in Different Countries — An Overview (37 min)

Some are concerned that regulating AI progress in one country will slow that country down, putting it at a disadvantage in a global AI arms race. Many proponents of AI regulation disagree; they have pushed back on the overall framework, pointed out serious drawbacks and limitations of racing, and argued that regulations do not have to slow progress down. Another disagreement is about whether countries are in fact in a neck-and-neck arms race; some believe that the United States and its allies have a significant lead, which would allow for regulation even if that does come at the cost of slowing down AI progress. [1] This overview uses simple metrics and indicators to illustrate and discuss the state of frontier AI development in different countries, and the relevant factors that shape how the picture might change.

Source: https://aisafetyfundamentals.com/governance-blog/state-of-ai-in-different-countries
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
What Does It Take to Catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring (33 min)

As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this: monitoring the computing hardware used for large-scale NN training. The framework’s primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners’ models, data, and hyperparameters. The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips. The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [Jia et al. ’21].

Source: https://arxiv.org/pdf/2303.11341.pdf
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
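To make stages (1) and (2) concrete: the core idea is that firmware logs a binding commitment to each weight snapshot plus its training metadata, which an inspector can later check against the disclosed weights. The sketch below is illustrative only and is not from the paper; it uses a plain SHA-256 hash as a stand-in for the paper's more involved Proof-of-Learning machinery, and all names and metadata fields are hypothetical.

```python
import hashlib
import json

def commit_to_snapshot(weights: bytes, training_metadata: dict) -> str:
    """Return a hash commitment binding a weight snapshot to its metadata.

    In the monitoring scheme, on-chip firmware would log such commitments
    during training; an inspector who later obtains the weights and the
    claimed training details can recompute the hash and check it matches
    the logged value, without the operator disclosing anything in advance.
    """
    record = hashlib.sha256()
    record.update(weights)
    # Canonical serialization so the same metadata always hashes identically.
    record.update(json.dumps(training_metadata, sort_keys=True).encode())
    return record.hexdigest()

# The operator's hardware logs a commitment at some point in training...
weights = b"\x00\x01\x02\x03"  # stand-in for serialized model weights
meta = {"step": 10_000, "dataset_hash": "abc123", "lr": 3e-4}
logged = commit_to_snapshot(weights, meta)

# ...and an inspector later verifies the disclosed snapshot against the log.
assert commit_to_snapshot(weights, meta) == logged
```

A real scheme must additionally prove that the snapshotted weights actually resulted from the claimed training run (the Proof-of-Learning variant the paper introduces); a bare hash only binds the snapshot, it does not attest to how it was produced.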
May 13, 2023
A Tour of Emerging Cryptographic Technologies (31 min)

Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies) and on techniques for computing on confidential data (such as secure multiparty computation). I provide an introduction to these technologies that assumes no mathematical background or previous knowledge of cryptography. Then, I consider several speculative predictions that some researchers and engineers have made about the technologies’ long-term political significance. This includes predictions that more “privacy-preserving” forms of surveillance will become possible, that the roles of centralized institutions ranging from banks to voting authorities will shrink, and that new transnational institutions known as “decentralized autonomous organizations” will emerge. Finally, I close by discussing some challenges that are likely to limit the significance of emerging cryptographic technologies. On the basis of these challenges, it is premature to predict that any of them will approach the transformativeness of previous technologies. However, this remains a rapidly developing area well worth following.

Source: https://uploads-ssl.webflow.com/614b70a71b9f71c9c240c7a7/617938781d1308004d007e2d_Garfinkel_Tour_Of_Emerging_Cryptographic_Technologies.pdf
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
Historical Case Studies of Technology Governance and International Agreements (36 min)

The following excerpts summarize historical case studies that are arguably informative for AI governance. The case studies span nuclear arms control, militaries’ adoption of electricity, and environmental agreements. (For ease of reading, we have edited the formatting of the following excerpts and added bolding.)

Source: https://aisafetyfundamentals.com/governance-blog/historical-case-studies
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
12 Tentative Ideas for US AI Policy (10 min)

About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals.[1] Many […] The original text contained 7 footnotes which were omitted from this narration.

Source: https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy
Narrated by TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
Let’s Think About Slowing Down AI (1h 15min)

If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous. The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also, in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).

Source: https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
What AI Companies Can Do Today to Help With the Most Important Century (19 min)

I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work. This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used. This piece could be useful to people who work at those companies, or people who are just curious.

Generally, these are not pie-in-the-sky suggestions: I can name more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).

I’ll cover:
- Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).
- Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.
- Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc., so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.
- Balancing these cautionary measures with conventional/financial success.

I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, and raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.

Source: https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
OpenAI Charter (3 min)

Our Charter describes the principles we use to execute on OpenAI’s mission.

Source: https://openai.com/charter
Narrated by TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
LP Announcement by OpenAI (7 min)

We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission. The original text contained 1 footnote which was omitted from this narration.

Source: https://openai.com/blog/openai-lp
Narrated by TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
May 13, 2023
International Institutions for Advanced AI (43 min)

International institutions may have an important role to play in ensuring advanced AI systems benefit humanity. International collaborations can unlock AI’s ability to further sustainable development, and coordination of regulatory efforts can reduce obstacles to innovation and the spread of benefits. Conversely, the potential dangerous capabilities of powerful and general-purpose AI systems create global externalities in their development and deployment, and international efforts to further responsible AI practices could help manage the risks they pose. This paper identifies a set of governance functions that could be performed at an international level to address these challenges, ranging from supporting access to frontier AI systems to setting international safety standards. It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations: 1) a Commission on Frontier AI that facilitates expert consensus on opportunities and risks from advanced AI, 2) an Advanced AI Governance Organization that sets international standards to manage global threats from advanced models, supports their implementation, and possibly monitors compliance with a future governance regime, 3) a Frontier AI Collaborative that promotes access to cutting-edge AI, and 4) an AI Safety Project that brings together leading researchers and engineers to further AI safety research. We explore the utility of these models and identify open questions about their viability.

Source: https://arxiv.org/pdf/2307.04699.pdf
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.