The KickstartAI Podcast

Episode 7: AI misalignment and misuse — should we really be worried?



In this episode, we're joined by philosopher and product architect Paul Kuyer from AxonIQ to dissect the growing concerns around AI safety: from existential risks to the immediate dangers we face today. Our conversation revolves around the “AI 2027” scenario, which sketches a plausible path to catastrophe driven by artificial general intelligence (AGI). We dive into two main concerns:

Misalignment: AI’s goals diverging from human values, potentially leading to harmful sub-goals like self-preservation.

Misuse: Bad actors using AI for malicious purposes, including cyber warfare and social manipulation.

We also discuss the path to AGI, skepticism about AI’s future capabilities, and the distinction between the human brain and neural networks. Lastly, we explore our inclination to anthropomorphize AI and address its psychological impact, examining its sycophantic tendencies and the potential for misuse in manipulating people.


Links and references:

AI 2027: https://ai-2027.com/

Transformer Circuits (Anthropic): https://transformer-circuits.pub/ 

A Disneyland without children: https://www.lesswrong.com/posts/pk9mofif2jWbc6Tv3/fiction-a-disneyland-without-children

Ilya Sutskever on the AI transformation underway: https://www.youtube.com/watch?v=UL7qQ0E2hJ0



The KickstartAI Podcast, by KickstartAI