Amateur read-throughs of blog posts on Cold-Takes.com, for those who prefer listening to reading. Available on Apple, Spotify, Google Podcasts, and anywhere else you listen to podcasts by searching for Cold Takes.
February 20, 2023 (19 min): What AI companies can do today to help with the most important century
Major AI companies can increase or reduce global catastrophic risks.
https://www.cold-takes.com/what-ai-companies-can-do-today-to-help-with-the-most-important-century/

February 10, 2023 (31 min): Jobs that can help with the most important century
People are far better at their jobs than at anything else. Here are the best ways to help the most important century go well.
https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/

January 25, 2023 (21 min): Spreading messages to help with the most important century
For people who want to help improve our prospects for navigating transformative AI, and have an audience (even a small one).
https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/

January 13, 2023 (29 min): How we could stumble into AI catastrophe
Hypothetical stories where the world tries, but fails, to avert a global disaster.
https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe

January 05, 2023 (26 min): Transformative AI issues (not just misalignment): an overview
An overview of key potential factors (not just alignment risk) for whether things go well or poorly with transformative AI.
https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/

December 22, 2022 (22 min): Racing Through a Minefield: the AI Deployment Problem
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course?
https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/

December 15, 2022 (24 min): High-level hopes for AI alignment
A few ways we might get very powerful AI systems to be safe.
https://www.cold-takes.com/high-level-hopes-for-ai-alignment/

December 08, 2022 (23 min): AI safety seems hard to measure
Four analogies for why "We don't see any misbehavior by this AI" isn't enough.
https://www.cold-takes.com/ai-safety-seems-hard-to-measure/

November 29, 2022 (47 min): Why Would AI "Aim" To Defeat Humanity?
Today's AI development methods risk training AIs to be deceptive, manipulative and ambitious. This might not be easy to fix as it comes up.
https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/

June 30, 2022 (22 min): The Track Record of Futurists Seems ... Fine
We scored mid-20th-century sci-fi writers on nonfiction predictions. They weren't great, but weren't terrible either. Maybe doing futurism works fine.
https://www.cold-takes.com/the-track-record-of-futurists-seems-fine/
FAQs about Cold Takes Audio:
How many episodes does Cold Takes Audio have?
The podcast currently has 50 episodes available.