Hello world.
Until recently, I was a senior engineer at a Big Tech company, with 25 years in the technology industry behind me. Today, I’m unemployed, watching the industry I grew up in sprint headlong into what feels like the largest speculative bet of its lifetime.
Not long before I was laid off, my former employer held a company-wide AI hackathon. By that point, the company had already invested billions of dollars into training frontier models and building out the infrastructure to support them. Massive data centers. Enormous training runs. A portfolio of large language models that needed, urgently, to justify their existence.
The goal of the hackathon was simple, at least on paper: come up with bold, transformative, responsible AI ideas that could, somehow, turn all of this spending into revenue.
In other words: please make the AI pay for itself.
The A-Team (and a Reality Check)
I joined a hackathon team led by a senior engineering leader—let’s call him Danny. On paper, it was the A-Team.
There was Jimmy, the Canadian tech lead who could brute-force his way through any codebase. Subash, an H-1B architect who was frighteningly sharp. Alex, a junior engineer who had survived our brutal internship program. And Lionel, a support team lead with an effortlessly charming British accent, which, by the way, is an unfairly powerful asset when pitching business ideas in tech.
We brainstormed and quickly landed on what seemed like an obvious win: an AI-powered customer support agent.
The idea was straightforward. Most customer support cases are repetitive. With a large language model enhanced by Retrieval-Augmented Generation (RAG), which essentially gives the model access to proprietary internal knowledge, we believed the agent could autonomously resolve roughly 90% of incoming cases.
Within a day, we had a working proof of concept running inside a Docker container.
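The retrieval loop at the heart of an agent like this can be sketched in a few lines. This is a toy illustration only: the bag-of-words retriever and the canned knowledge base below are stand-ins for a real embedding model, vector store, and LLM call, and none of it reflects our actual stack.

```python
import re

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words token set.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a, b):
    # Jaccard overlap as a crude stand-in for cosine similarity.
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical internal knowledge base (the "proprietary" documents).
KNOWLEDGE_BASE = [
    "To reset your password, use the account settings page.",
    "Refunds are processed within 5 business days.",
    "Enterprise plans include priority support.",
]

def retrieve(question, k=1):
    # Rank documents by similarity to the question; keep the top k.
    q = embed(question)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: similarity(q, embed(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(question):
    # A real agent would send this augmented prompt to an LLM;
    # here we just assemble and return it.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I reset my password?"))
```

The point of the sketch is how little machinery the happy path needs, which is exactly why we had a proof of concept in a day, and exactly why it only covered the easy 90%.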
Feeling confident, we presented the idea to a business leader in our product line; let's call him Leo.
Leo listened patiently. Then he dismantled the idea.
Yes, he acknowledged, the agent might handle 90% of cases. But the remaining 10% (the hard, messy, ambiguous ones) were what consumed over 90% of the support team's time. Those were the cases customers escalated. Those were the cases that mattered.
What we had built, he argued, was essentially a glorified FAQ page.
Then came the line that stuck with me: “This feels like a shiny solution in search of a problem.”
A Microcosm of the AI Industry
That moment crystallized something uncomfortable.
Despite the massive investments and the relentless internal pressure to “AI-ify” everything, it was genuinely difficult to extract real, defensible business value from AI in many domains. Outside of narrow niches with abundant training data, returns were murky at best.
That small hackathon experience now feels like a perfect microcosm of the broader AI industry.
Hundreds of billions, possibly trillions, of dollars are being poured into AI. Yet most AI initiatives today are losing money. In some cases, a lot of money. Each API call to a large language model can cost several times more to serve than it generates in revenue.
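The claim about API calls is easy to sanity-check with back-of-envelope arithmetic. Every number below is an illustrative assumption, not a figure from any company's books:

```python
# Hypothetical unit economics for an LLM-backed feature.
COST_PER_1K_TOKENS = 0.01      # assumed blended serving cost ($)
TOKENS_PER_REQUEST = 3_000     # assumed prompt + completion size
REVENUE_PER_REQUEST = 0.01     # assumed revenue attributed per call ($)

cost_per_request = COST_PER_1K_TOKENS * TOKENS_PER_REQUEST / 1_000
margin = REVENUE_PER_REQUEST - cost_per_request

print(f"cost/request = ${cost_per_request:.2f}, margin = ${margin:.2f}")
# Under these assumptions, each call costs 3x what it earns,
# and scale makes the losses bigger, not smaller.
```

The uncomfortable property of this math is that, unlike classic software, more usage does not dilute the cost: every request pays the inference bill again.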
Meanwhile, the hype machine roars on.
World models. Humanoid robots. Confident proclamations that AGI is just around the corner.
Some of these efforts are legitimate research. Others feel like science fiction being aggressively monetized. If this reminds you of the dot-com bubble, you're not wrong. Except this time, the scale is orders of magnitude larger.
Financial Alchemy and Corporate Optics
The problem is that the money has already been spent. And investors want returns now.
To maintain the appearance of growth, companies resort to financial gymnastics: buying AI services from each other to simulate demand, reclassifying existing product revenue as “AI revenue” after adding superficial features, and framing mass layoffs as “AI efficiency gains” while quietly shifting work offshore.
The result is a market that looks strong on the surface but increasingly fragile underneath.
Big Tech now accounts for roughly 40–50% of the S&P 500's total valuation. If confidence cracks, and investors realize these investments won't pay off on the promised timelines, the unwind could be violent.
If the Bubble Bursts
If an AI collapse happens, it likely won’t be a single dramatic moment. A weaker, AI-only company could fall first. A large investor could panic. Political backlash against data centers and energy costs could accelerate sentiment shifts.
The downstream effects would be severe: an AI winter where funding dries up, market caps shrink, RSUs evaporate, and layoffs spread not just across AI teams, but across entire platforms and ecosystems.
Beyond tech, the impact would ripple outward: data centers halted, semiconductor orders canceled, real estate markets strained, financial institutions exposed. In a worst-case scenario, cascading failures could spill into the broader economy.
This isn’t a prediction. It’s a plausible risk path.
How to Cope (Not Panic)
So what can individuals, especially software engineers, do?
At work: double down on core problem-solving skills. Learn to wield AI as a tool, not fear it. Build T-shaped expertise that spans engineering, product, and business.
Outside of work: build a much larger emergency fund than traditional advice suggests. Reduce fixed expenses. Create alternative income streams: side projects, businesses, anything that isn’t tied to a single employer.
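The emergency-fund advice is just runway arithmetic, and it's worth seeing how much leverage the expense side has. The figures below are made-up placeholders, not a recommendation:

```python
# Months of runway for a given fund, before and after trimming
# fixed expenses. All dollar amounts are hypothetical.

def runway_months(fund, monthly_spend):
    return fund / monthly_spend

fund = 60_000       # hypothetical savings ($)
spend = 6_000       # hypothetical current monthly spend ($)
trimmed = 4_500     # hypothetical spend after cutting fixed costs ($)

print(f"current runway: {runway_months(fund, spend):.1f} months")    # 10.0
print(f"trimmed runway: {runway_months(fund, trimmed):.1f} months")  # 13.3
```

Cutting spend by 25% bought over three extra months here, without saving another dollar, which is why reducing fixed expenses is listed alongside the bigger fund rather than as an afterthought.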
None of this is easy. And none of it is guaranteed to be necessary.
This may all amount to nothing more than the late-night musings of a laid-off engineer with too much time to think. The AI boom could continue. Stocks could soar. Everyone could get rich.
But history suggests that when investment, hype, and financial reality drift too far apart, gravity eventually reasserts itself.
For now, all we can do is stay alert, stay flexible, and remember that technological revolutions are rarely as smooth or as profitable as they look at the peak.
By AsianDadEnergy