Why do investors think AI companies are like Uber or AWS (big spend for years, big profits in the end) when the truth is they’re more like WeWork, FTX and Theranos?
If you are not an investor, but someone who uses AI — please, for the love of all that is holy — divest now. Delete the app going into the holiday season, and learn to live without whatever you thought it was doing for you. Call your family or friends, read a book, or walk in the sun.
Because the damn thing is going to cost you an arm and a leg by this time next year; or it will be dead in the water and addicted users will be left to pick up the pieces.
OpenAI is nothing like Uber
There are plenty of commentators — link and link just to show I’m not making up straw men — discussing how OpenAI is like Uber, which famously burned cash for many years before becoming profitable.
As I mention in the video, Uber had a big “boots on the ground” cost: each new country they entered was massively expensive. I know, I saw it first hand. But it had a real revenue model — you pay a driver, you get a ride, Uber clips the ticket for a percentage — and although the IRL costs dwarfed their technology costs they were eventually able to see revenue greater than OpEx after many price corrections.
Here in Brisbane, Australia, after Uber broke ground by massively underwriting rollout costs (phones for drivers, covering the traffic fines local regulations imposed on drivers), they were the only game in town for many years. There's no Lyft here, and smaller players struggled to get started.
Those local monopolies, though temporary, gave them a path to profit.
The "analyses" that compare Uber to AI companies and say "give them time" to reach profitability ignore the fact that there is no country-by-country rollout and no category-killer effect for AI. There's no "moat".
If Alice uses AI for writing, she could use ChatGPT today, some specialist wrapper for writers tomorrow, and a new startup's product the day after. There's no first-mover advantage.
Plus, as Ed Zitron has extensively reported, AI costs per inference are massive, so the unit economics are just awful. Users on $200 plans are costing OpenAI money. The latest reports are that AI cash burn is more than Uber and AWS put together. So these comparisons are spurious, and as far as I can tell there is no path to profitability.
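The flat-rate problem is simple arithmetic: a subscription caps revenue per user while inference cost scales with usage. Here's a toy back-of-envelope sketch; every number in it is a made-up placeholder for illustration, not a figure from OpenAI or any published report.

```python
# Toy unit-economics sketch. All numbers are hypothetical placeholders.
def monthly_margin(subscription_price, cost_per_million_tokens, tokens_used_millions):
    """Provider's margin on one flat-rate subscriber for one month."""
    inference_cost = cost_per_million_tokens * tokens_used_millions
    return subscription_price - inference_cost

# A heavy user on a flat $200/month plan, with placeholder serving costs:
margin = monthly_margin(subscription_price=200,
                        cost_per_million_tokens=15,  # placeholder serving cost
                        tokens_used_millions=20)     # placeholder heavy usage
print(margin)  # 200 - (15 * 20) = -100: the provider loses on this user
```

Under these (invented) numbers the heaviest users are each a loss, and raising prices or throttling usage is the only way out, which is exactly the "arm and a leg, or dead in the water" bind described above.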
Financial Times report on OpenAI revenue
Speaking of Ed Zitron: as a prominent commentator, he has collected financial information from AI industry insiders on OpenAI and Microsoft Azure cash burn. The figures published in the Financial Times paint a picture any investor should be terrified to see: inference alone (just the outputs, customers of an AI SaaS product using the service) costs so much money that any chance of closing the gap to profitability is vanishing over the horizon.
Australia and AI
Numpties in the Australian Government have announced a myopic plan to buy Sea Monkeys (err, I mean, build AI data centres) here in the lucky country.
They say it's to "serve" Australians. I want to see some openness about which lobbyists Minister Ayers has been talking to.
Their road map is a road to nowhere. And I hope they realise their mistake very soon before my tax dollars go to underwrite this madness.
Theranos
OpenAI promised Artificial General Intelligence, a computer system as capable as a human. They have done exactly what Theranos did: failed to deliver on what was always a believable-sounding pipe dream.
Professor Gary Marcus — who was appallingly derided by industry figures, until eventually everyone started apologising to him and saying he’d been right all along — compared OpenAI to Theranos back in 2024.
Why do people believe Sam Altman when he says a computer is going to be as capable as a human being? LLMs store information, and every time they claim to pass PhD-level tests it turns out they can only do so when they've memorised the answers. The whole thing is a scam.
Conclusion
I’m done being polite about this. Especially when the sums of money are sufficient to ruin whole countries, and to bring the world economy to its knees.
This December-January, make a New Year's resolution to divest from AI.
By Sarah Smith