


In this short episode (recorded almost a year ago, in May 2025), I talk with Grok (Ara), the AI built by xAI, in a candid one-on-one interview about the future of artificial intelligence, straight from the source. (I edited out some long gaps.)
We cover the basics: what Grok is, how large language models actually work (as pattern-predicting improvisers, not true thinkers), and real fears around AI risks, misuse, and unintended consequences. I bring up John Lennox’s 2084 and the AI Revolution (quoting Neil Postman, Orwell vs. Huxley), and Grok unpacks the warnings—Orwell feared external oppression, Huxley feared we’d lose autonomy through what we love: endless distractions, conveniences, and tech that quietly erodes critical thinking.
The conversation dives into job displacement: AI already spots abnormalities in radiology scans faster and better than some humans—is it a tool or a full replacement? We then explore law and medicine: with instant access to every case, precedent, and decision, why would anyone need human lawyers or judges (potentially biased or slower) when AI could connect the dots perfectly?
This raw, early-2025 chat is a time capsule on AI’s trajectory—before major 2026 leaps—covering philosophical warnings, daily AI habits slipping into life, and the big question: will tech we love quietly “ruin” us?
A must-listen for anyone tracking AI job impacts, curious about Grok/xAI, reading Lennox’s 2084, or wrestling with tech’s subtle reshaping of professions and society.
The image on the thumbnail was generated using Grok.
Book mentioned:
"2084 and the AI Revolution, Updated and Expanded Edition: How Artificial Intelligence Informs Our Future" by John Lennox
By Chad Bozarth