


Hello world.
I’m an unemployed ex–Big Tech software engineer with 25+ years in the trenches of the software industry. I’ve ridden the dot-com bubble, offshoring waves, mobile revolutions, cloud migrations, and whatever we’re calling the post-pandemic tech reckoning.
And now? I’m watching the AI gold rush from the outside.
Depending on which headline you read this morning, artificial intelligence is either:
* An economically unsustainable hype machine
* A glorified autocomplete that can’t boost productivity
* Or the final exponential surge before Artificial General Intelligence automates every white-collar job in the next 12–18 months
So which is it?
Is AI just a “probabilistic parrot” repeating patterns from the internet? Or is something structurally different happening this time?
Let’s walk through what’s actually changed over the past five years in plain English.
Phase 1: The Giant Autocomplete
Early large language models like OpenAI’s GPT-3 were, at their core, probability machines.
You type: “Mary had a little…”
It predicts: “lamb.”
Not because it understands nursery rhymes. Not because it has memories. But because “lamb” statistically follows that phrase in its training data.
That training data? A massive scrape of publicly available internet text.
Impressive? Yes. Intelligent? Not really.
On its own, this was more a clever toy than an economic earthquake.
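The "giant autocomplete" idea can be shown with a toy sketch. This is not how a real LLM works internally (real models use neural networks over tokens, not word counts), but the core move is the same: tally what follows each phrase in the training text, then predict the statistically likeliest continuation.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus standing in for the internet scrape.
corpus = [
    "mary had a little lamb",
    "mary had a little dog",
    "mary had a little lamb",
]

# For every prefix seen in training, count which word follows it.
next_word = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for i in range(len(words) - 1):
        prefix = " ".join(words[: i + 1])
        next_word[prefix][words[i + 1]] += 1

def predict(prefix):
    # Return the statistically most likely continuation.
    return next_word[prefix].most_common(1)[0][0]

print(predict("mary had a little"))  # "lamb" wins 2-to-1 over "dog"
```

No understanding of nursery rhymes anywhere in there, just counting.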
Phase 2: Teaching the Parrot Manners
The next breakthrough was supervised fine-tuning.
Humans created tens of thousands of carefully written question-and-answer examples. The model was retrained on these, nudging it toward responses that felt helpful, structured, and safe.
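The shape of that fine-tuning data is simple. The example below is hypothetical (real labs use varying formats and far larger datasets), but the pattern is always pairs of a prompt and a human-written ideal response.

```python
# Hypothetical supervised fine-tuning examples: (prompt, ideal response)
# pairs written by humans. The model is retrained to imitate the responses.
sft_data = [
    {
        "prompt": "How do I reverse a list in Python?",
        "response": "You can use reversed(my_list) or the slice my_list[::-1].",
    },
    {
        "prompt": "Summarize this email politely.",
        "response": "Certainly. Here is a short, polite summary of the email...",
    },
]

print(len(sft_data), "training pairs")
```

Multiply that by tens of thousands of pairs and you get a model that answers like a helpful assistant instead of an autocomplete.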
This is how the first version of ChatGPT launched in 2022.
It was better. But still limited.
The bottleneck? Humans are expensive. Slow. Finite.
So AI researchers did what engineers always do when constrained by humans: they automated the humans.
Phase 3: Reinforcement Learning at Scale
Instead of having humans generate both questions and answers, the model could now generate answers while human experts simply judged them as good or bad.
Good answer? Increase the probability of similar responses. Bad answer? Penalize it.
This is reinforcement learning.
But even that eventually hits scale limits.
So researchers trained AI models to become evaluators: first learning from human feedback, then taking over the judging process themselves. Now AI models could train future AI models.
And once humans are mostly out of the loop?
You can scale to absurd levels.
Millions. Billions. Trillions of generated tasks. Evaluated in weeks using massive data centers.
Still probability math. But at industrial scale.
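The direction of that feedback loop can be sketched in a few lines. Real RLHF adjusts millions of neural-network weights via gradient updates; this toy version just shows the core idea of rewards nudging probabilities up or down.

```python
# Toy reinforcement signal: a judge's score nudges the probability of a
# response style up (reward > 0) or down (reward < 0). Illustrative only.
probs = {"good answer": 0.5, "bad answer": 0.5}

def reinforce(response, reward, lr=0.1):
    probs[response] += lr * reward
    total = sum(probs.values())
    for k in probs:                 # renormalize back to a distribution
        probs[k] /= total

reinforce("good answer", +1.0)      # judge liked it
reinforce("bad answer", -1.0)       # judge did not

print(probs["good answer"] > probs["bad answer"])  # True
```

Swap the human judge for an AI judge, and the loop runs without anyone sleeping.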
Phase 4: Chain-of-Thought — The “Reasoning” Illusion
Then came reasoning models.
Instead of predicting a single answer to:
“If you have 8 pizza slices and eat 3, how many remain?”
The model breaks the problem into steps.
8 slices. Minus 3 eaten. Equals 5 left.
This “chain-of-thought” decomposition dramatically improves accuracy. Not because the AI understands pizza, but because many small probability jumps are easier than one giant one.
It’s still math.
Just math with better scaffolding.
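In practice, the scaffolding can be as simple as the wording of the prompt. The strings below are illustrative, not any vendor's official format: the "step by step" cue invites the model to emit intermediate tokens before committing to an answer.

```python
# Illustrative only: two ways to prompt the same question.
question = "If you have 8 pizza slices and eat 3, how many remain?"

direct_prompt = question + "\nAnswer:"
cot_prompt = question + "\nLet's think step by step:"

# With the second prompt, a reasoning model's trace tends to look like:
#   "Start with 8 slices. Eat 3. 8 - 3 = 5. So 5 remain."
# Each line of that trace is a smaller, easier prediction than
# jumping straight to "5".
print(cot_prompt)
```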
Phase 5: Tools — From Talking to Doing
This is where things get serious.
Language models began calling tools.
Need the current temperature? Call a weather API. Need to write to a file? Call a document function. Need to push code to a repository? Trigger a Git action.
Models like Gemini, Claude, and modern ChatGPT versions now interact with external systems.
This transforms AI from:
Answer machine → Action machine.
And action is where jobs live.
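Under the hood, tool use is a protocol: the model emits a structured request instead of plain text, and the surrounding program executes it. A minimal sketch, where the tool name and the `get_weather` function are made up for illustration:

```python
# Minimal tool-calling dispatch. The model emits a structured call
# (here, a dict); the harness looks up the function and runs it.
def get_weather(city):
    # Stand-in for a real weather API call.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

def run_tool_call(call):
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# What a tool-using model might emit instead of a text answer:
model_output = {"name": "get_weather", "arguments": {"city": "Seattle"}}
print(run_tool_call(model_output))
```

The model never touches the outside world directly; it asks, and the harness acts. That separation is also why tool access is where the real capability (and risk) lives.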
Phase 6: ReAct Architecture — Think, Act, Observe
Complex work isn’t one step. It’s dozens. Hundreds.
The “Reason + Act” (ReAct) pattern works like this:
* Think through a plan.
* Execute one step using a tool.
* Observe the result.
* Update context.
* Repeat until done.
This loop enables multi-step cognitive labor.
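The loop above can be written as a bare skeleton. Here `model` and `tools` are hypothetical stand-ins for a real LLM and real APIs; the point is the think, act, observe, update cycle itself.

```python
# Skeleton of the ReAct loop. "model" returns (thought, action);
# action is None when the model decides it's done.
def react_loop(goal, model, tools, max_steps=10):
    context = [f"Goal: {goal}"]
    for _ in range(max_steps):
        thought, action = model(context)                  # 1. think
        if action is None:
            return thought                                # done
        result = tools[action["name"]](**action["args"])  # 2. act
        context.append(f"{action['name']} -> {result}")   # 3-4. observe, update
    return "step budget exhausted"

# A scripted stand-in model: call a calculator once, then answer.
def fake_model(context):
    if len(context) == 1:
        return ("need to add", {"name": "add", "args": {"a": 2, "b": 3}})
    return ("The answer is 5", None)

print(react_loop("compute 2+3", fake_model, {"add": lambda a, b: a + b}))
```

Replace the scripted model with a frontier LLM and the calculator with a shell, a browser, and a code repository, and you have the basic architecture behind today's coding agents.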
But there’s a catch: memory.
AI models have context windows: in effect, short-term memory buffers. Overload them, and they hallucinate or forget key constraints.
Enter the next hack.
Phase 7: Agentic Frameworks — The Harness Around the Brain
Agent systems wrap AI models in orchestration layers.
Think of it as:
* A planner agent
* Worker sub-agents
* A QA agent
* External databases storing long-term context
Each agent only sees what it needs.
One plans the website. Another writes code. Another tests it.
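That division of labor is, structurally, just a pipeline. A toy sketch, where every "agent" is a plain function standing in for a model call, and the task strings are invented for illustration:

```python
# Toy agent pipeline: planner splits the job, workers each see only
# their own narrow task, and a QA step checks the combined result.
def planner(job):
    return [f"write {job}", f"test {job}"]

def worker(task):
    return f"output of [{task}]"          # stand-in for a model call

def qa(outputs):
    return all("output of" in o for o in outputs)

def run_pipeline(job):
    tasks = planner(job)
    outputs = [worker(t) for t in tasks]  # each worker sees one task only
    return outputs if qa(outputs) else None

print(run_pipeline("login page"))
```

Keeping each agent's view narrow is exactly the memory trick: no single context window ever has to hold the whole project.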
Some systems even simulate 24/7 “proactive” assistants using timed loops that periodically re-prompt the model — giving the illusion of autonomy.
Is it consciousness? No.
It’s math wrapped in for-loops.
But here’s the uncomfortable part:
It works.
Are We Near AGI?
No.
There is no understanding. No awareness. No inner life. No self.
We are not at the singularity.
But that doesn’t mean we’re safe.
The Productivity Question
Recent industry studies suggest frontier models like ChatGPT 5.2 and Claude 4.6 can sustain deep, complex cognitive work for over an hour with success rates exceeding 80%.
For context:
The average white-collar worker can maintain true deep focus for 60–90 minutes at a time. Across an entire day? Maybe 2–3 hours of real high-quality output.
If AI focus time doubles every six months, as some data suggests, we may soon see models capable of four-hour sustained cognitive blocks.
And they don’t need sleep. Or weekends. Or health insurance.
So… Will AI Replace White-Collar Jobs?
Here’s the honest answer:
Any job that consists primarily of interacting with a computer interface is at risk.
* Software engineering
* Accounting
* Legal research
* Financial analysis
* Marketing copy
* Project management
* Operations
AI models are being fine-tuned for each of these domains right now.
We are not watching a toy evolve.
We are watching a new kind of digital labor force scale geometrically.
The Paradox
AI is still “just” probability.
It doesn’t understand.
It doesn’t reason in the human sense.
And yet, through scale, reinforcement learning, tool use, reasoning scaffolds, and agentic orchestration, it can now perform long-duration cognitive work that looks remarkably similar to what many professionals do daily.
That’s the tension of this moment.
It’s not conscious.
But it’s competent.
Something Big Is Happening
As someone who’s spent 25 years in software, and who is currently unemployed, I don’t have the luxury of dismissing this as hype.
I’ve seen enough technology waves to recognize when the ground is actually shifting.
This one feels different.
Not because machines woke up. But because the scaffolding around them did.
If you’re watching this space with a mix of curiosity and existential dread, you’re not alone.
We’re all trying to figure out what comes next.
And whether we’ll be coding it…
Or competing with it.
By AsianDadEnergy