
This week I'm talking with Liron Shapira, founder, technologist, and self-styled AI doom pointer-outer.
In the show, we cover an intro to AI risk, thoughts on a new tier of intelligence, causally mapping goals to outcomes (and why that's dangerous), a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.
Hosted on Acast. See acast.com/privacy for more information.