FAQs about BlueDot Narrated
How many episodes does BlueDot Narrated have? The podcast currently has 220 episodes available.
May 13, 2023 - Racing Through a Minefield: The AI Deployment Problem (22 min)
Audio versions of blogs and papers from BlueDot courses.
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?
Source: https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/
Crossposted from the Cold Takes Audio podcast.
A podcast by BlueDot Impact.
May 13, 2023 - Choking off China’s Access to the Future of AI (8 min)
Audio versions of blogs and papers from BlueDot courses.
Introduction: On October 7, 2022, the Biden administration announced a new export controls policy on artificial intelligence (AI) and semiconductor technologies to China. These new controls—a genuine landmark in U.S.-China relations—provide the complete picture after a partial disclosure in early September generated confusion. For weeks the Biden administration had been receiving criticism in many quarters for a new round of semiconductor export control restrictions, first disclosed on September 1. The restrictions block leading U.S. AI computer chip designers, such as Nvidia and AMD, from selling their high-end chips for AI and supercomputing to China. The criticism typically goes like this: China’s domestic AI chip design companies could not win customers in China because their chip designs could not compete with Nvidia and AMD on performance. Chinese firms could not catch up to Nvidia and AMD on performance because they did not have enough customers to benefit from economies of scale and network effects.
Source: https://www.csis.org/analysis/choking-chinas-access-future-ai
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - Primer on AI Chips and AI Governance (26 min)
Audio versions of blogs and papers from BlueDot courses.
If governments could regulate the large-scale use of “AI chips,” that would likely enable governments to govern frontier AI development—to decide who does it and under what rules. In this article, we will use the term “AI chips” to refer to cutting-edge, AI-specialized computer chips (such as NVIDIA’s A100 and H100 or Google’s TPUv4). Frontier AI models like GPT-4 are already trained using tens of thousands of AI chips, and trends suggest that more advanced AI will require even more computing power.
Source: https://aisafetyfundamentals.com/governance-blog/primer-on-ai-chips
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - The State of AI in Different Countries — An Overview (37 min)
Audio versions of blogs and papers from BlueDot courses.
Some are concerned that regulating AI progress in one country will slow that country down, putting it at a disadvantage in a global AI arms race. Many proponents of AI regulation disagree; they have pushed back on the overall framework, pointed out serious drawbacks and limitations of racing, and argued that regulations do not have to slow progress down. Another disagreement is about whether countries are in fact in a neck-and-neck arms race; some believe that the United States and its allies have a significant lead, which would allow for regulation even if that does come at the cost of slowing down AI progress.[1] This overview uses simple metrics and indicators to illustrate and discuss the state of frontier AI development in different countries — and relevant factors that shape how the picture might change.
Source: https://aisafetyfundamentals.com/governance-blog/state-of-ai-in-different-countries
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - What Does It Take to Catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring (33 min)
Audio versions of blogs and papers from BlueDot courses.
As advanced machine learning systems’ capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other’s compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework’s primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners’ models, data, and hyperparameters. The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of untracked chips. The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [Jia et al. ’21].
Source: https://arxiv.org/pdf/2303.11341.pdf
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - A Tour of Emerging Cryptographic Technologies (31 min)
Audio versions of blogs and papers from BlueDot courses.
Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies) and on techniques for computing on confidential data (such as secure multiparty computation). I provide an introduction to these technologies that assumes no mathematical background or previous knowledge of cryptography. Then, I consider several speculative predictions that some researchers and engineers have made about the technologies’ long-term political significance. This includes predictions that more “privacy-preserving” forms of surveillance will become possible, that the roles of centralized institutions ranging from banks to voting authorities will shrink, and that new transnational institutions known as “decentralized autonomous organizations” will emerge. Finally, I close by discussing some challenges that are likely to limit the significance of emerging cryptographic technologies. On the basis of these challenges, it is premature to predict that any of them will approach the transformativeness of previous technologies. However, this remains a rapidly developing area well worth following.
Source: https://uploads-ssl.webflow.com/614b70a71b9f71c9c240c7a7/617938781d1308004d007e2d_Garfinkel_Tour_Of_Emerging_Cryptographic_Technologies.pdf
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - Historical Case Studies of Technology Governance and International Agreements (36 min)
Audio versions of blogs and papers from BlueDot courses.
The following excerpts summarize historical case studies that are arguably informative for AI governance. The case studies span nuclear arms control, militaries’ adoption of electricity, and environmental agreements. (For ease of reading, we have edited the formatting of the following excerpts and added bolding.)
Source: https://aisafetyfundamentals.com/governance-blog/historical-case-studies
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - 12 Tentative Ideas for US AI Policy (10 min)
Audio versions of blogs and papers from BlueDot courses.
About two years ago, I wrote that “it’s difficult to know which ‘intermediate goals’ [e.g. policy goals] we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI.” Much has changed since then, and in this post I give an update on 12 ideas for US policy goals.[1] Many […]
The original text contained 7 footnotes which were omitted from this narration.
Source: https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy
Narrated by TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - Let’s Think About Slowing Down AI (1 h 15 min)
Audio versions of blogs and papers from BlueDot courses.
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous. The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also, in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).
Source: https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai
Narrated for AGI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
A podcast by BlueDot Impact.
May 13, 2023 - What AI Companies Can Do Today to Help With the Most Important Century (19 min)
Audio versions of blogs and papers from BlueDot courses.
I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work. This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.[1] This piece could be useful to people who work at those companies, or people who are just curious.
Generally, these are not pie-in-the-sky suggestions - I can name[2] more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).[3]
I’ll cover:
- Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously).
- Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.
- Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc., so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.
- Balancing these cautionary measures with conventional/financial success.
I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, and raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on.
Source: https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/
A podcast by BlueDot Impact.