Sandcastles

The Tsunami is Coming


This post is meant to help animal advocates start thinking about how the artificial intelligence revolution will impact our work. If you’ve been feeling anxious about AI but aren’t sure where to begin, or if you’ve never considered that AI could disrupt animal advocacy, you’re in the right place.

Additional reading/listening

If you’re not convinced that intelligence, reasoning, and creativity are appropriate words to describe what LLMs do, here are some options for further reading:

“Automating Creativity” by Ethan Mollick (already more than two years old; it has aged well, though some details are out of date)

Download the DeepSeek app and try talking with the DeepThink reasoning model, which gives you full access to its Chain of Thought (CoT), the thinking it does before giving you a final answer. Experiment with easy questions and hard questions. Or just search Reddit for examples like this one.

Sparks of AGI talk by Sébastien Bubeck (or the accompanying paper) – this was what GPT-5 recommended when I asked for its favorite lecture on the intelligent nature of LLMs

If you want to dig deeper on AI capabilities, or help your brain think about what those capabilities could mean, here is some further reading:

Dwarkesh made this fun animated video about how an all-AI company would work

The field of AI is moving so fast that by the time anyone has written a ‘comprehensive’ update, it is out of date. Given that, this explainer of AI scaling has aged well.

Situational Awareness by Leopold Aschenbrenner goes broader and deeper on everything I’ve covered and is a favorite among AI junkies.

My go-to sources for staying informed are Zvi Mowshowitz’s newsletter, the Cognitive Revolution podcast, the 80,000 Hours podcast, and the Dwarkesh podcast.

If you want to go deeper on how weird the AI future could get, here are some links:

If you could only listen to one interview to grasp the magnitude of change AI represents, make it Prof. Ian Morris on the 80,000 Hours podcast. Many of the ideas in part 4 were borrowed from Ian.

The AI 2027 report is a detailed telling of how an AI takeover could play out, useful for grounding your imagination.

Another episode of the 80,000 Hours podcast explores how AI could enable unprecedented concentration of power via coups.

If you’re up for something academic, Gradual Disempowerment by Jan Kulveit et al. spells out the reasons to expect human displacement even without coordination by a hostile actor.

The links discussed in part 5 are too numerous to list here, so head to the post on Substack: https://sandcastlesblog.substack.com/p/the-tsunami-is-coming


