Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes. https://betterwithout.ai/fear-AI-power

For much of the AI safety community, the central question has been "when will it happen?!" That question is futile: we don't have a coherent description of what "it" is, much less of how "it" would come about. Fortunately, a prediction wouldn't be useful anyway. An AI apocalypse is possible, so we should try to avert it. https://betterwithout.ai/scary-AI-when

You can support the podcast and get episodes a week early on Patreon: https://www.patreon.com/m/fluidityaudiobooks

If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold

Original music by Kevin MacLeod.

This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.