

👾 "To Serve Man", a classic Twilight Zone episode, asked a chilling question: when something promises to serve us, do we really know whose table we're being served on?

Today, that same question applies to AI alignment.

In this episode, I break down why AI won't align to us by accident. It will only align if we put in the work: technically, ethically, and socially.

Thank you for listening, and please subscribe to . . Where do we go from here? . . and tell your friends to do the same. Keep the mail and feedback coming at [email protected]. So far, I have received enough viewer mail that I anticipate a quick mailbag episode soon to answer some of the interesting questions that have come in.
By Scott Catallo