

Nate Soares, president of the Machine Intelligence Research Institute and the co-author (with Eliezer Yudkowsky) of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Little, Brown and Company, 2025), talks about why he worries that AI "superintelligence" will lead to catastrophic outcomes, and what safeguards he recommends to prevent this.
By WNYC