Will the AI bubble burst or is GenAI here to stay? The artificial intelligence industry is experiencing unprecedented financial euphoria, yet the current situation is deeply confusing. AI investments are reaching dizzying heights: consider OpenAI’s $40 billion funding round at a $300 billion valuation, or Mistral AI’s €1.7 billion round. At the same time, some commentators are highly critical. Ed Zitron, for instance, predicts that the AI bubble will burst in Q4 2025. All of this fuels a debate that is more intense than rational. I wanted to put these concerns to Bernhard Schaffrik, Principal Analyst at Forrester Research, whose analysis is insightful and nuanced. In his view, some sort of correction is coming, but GenAI is too popular to disappear.
Forrester’s Bernhard Schaffrik is recognized as one of the most insightful experts in artificial intelligence. He provides a nuanced analysis that transcends simple financial considerations. His perspective on the AI bubble burst scenario offers first-hand insights for understanding where this transformative technology is truly heading.
The question of a potential AI bubble burst does not admit a single, unambiguous answer. As Bernhard Schaffrik rightly points out, it all depends on one’s perspective. This duality of vision is probably one of the keys to understanding the current situation and the likelihood of an AI bubble burst.
“It’s almost impossible to get a one-sentence response from an analyst. Allow me two sentences. Number one is, of course, it always depends on the role or the profile you’re asking. If we are talking about financial investors, then yes, there are strong signals of this being a bubble because there is so much money being pumped into it—more than $120 billion in capital expenditure on AI infrastructure alone, just by the Magnificent Seven tech providers. So that bubble could burst,” explains Forrester’s expert.
This assessment gains particular relevance when considering Google’s $9 billion investment in an Oklahoma data center dedicated to advanced AI training infrastructure.
This financial perspective, however, tells only part of the story. Technological adoption follows a different logic from financial markets, as Schaffrik confirmed during our exchange about the AI bubble burst potential.
“But now, if you put yourself in the shoes of enterprise decision makers, tech decision makers, also AI users, there are many who would say, ‘I don’t care if that bubble bursts, the technology is there, and it won’t go away.’
“Regardless of the amounts of all the financial transactions surrounding the AI industry, people are actually using this technology. And they like what they are seeing. It might not be the disruptive, transformative value some are surmising. It’s probably more incremental than that, but the adoption of that technology is undeniable.”
Fortune’s analysis reveals a concerning gap between current investments and the revenues they generate. To justify those investments, AI companies would need roughly $40 billion in annual revenue, while they currently produce only $15 to $20 billion.
I was wondering whether this $20-25 billion gap could represent a systemic risk that could trigger an AI bubble burst.
Schaffrik remains relatively optimistic on this point: “There is still enough money in that market to back these revenue gaps at least for a while. And what I’m also seeing is that especially when it comes to the largest enterprises on the planet, they are convinced to continue using that software. And if it comes at a premium which is decent, arguably, maybe a couple percentage points higher than what they are paying today for the software, then this seems to be acceptable.”
This acceptance of additional costs by large enterprises stems from the incremental value they perceive, even if it hasn’t yet reached the promised transformation level that might prevent an AI bubble burst scenario.
A particularly troubling element in the current ecosystem is the recent NewsGuard study revealing that major LLM systems are no longer progressing but regressing, generating more hallucinations and errors. This observation raises fundamental questions about current technology maturity and its impact on AI bubble burst predictions.
“I’m not saying that LLMs and generative AI are progressing in a linear fashion nor that this technology will be disruptive in any way, despite the promise. As we have seen with emerging technologies for decades and even centuries, it takes breakthrough technological revolutions rather than evolutions to fulfill such promises,” analyzes Schaffrik.
This vision of the current limitations of AI doesn’t diminish Bernhard’s long-term optimism: “But I’m also convinced that these breakthroughs will happen, not within the next seven, eight, nine, 12 months, but maybe in the long term. Something else will be coming up.”
One of Schaffrik’s most compelling criticisms concerns the energy efficiency of current systems. His comparison between the human brain and data centers is striking and relevant to understanding whether we’re facing an AI bubble burst.
“If we look at the amount of energy our brains are requiring to create a certain inference, and how much an LLM would require to achieve the same result with electricity, this cannot be the way forward.”
This energy inefficiency constitutes a major barrier to scalability and will require significant technological breakthroughs to overcome, potentially influencing AI bubble burst timing.
The 95% failure rate of corporate AI pilots revealed by MIT research might seem alarming and suggestive of an impending AI bubble burst. Yet Schaffrik places this figure in its historical context: “It’s quite normal. As an analyst covering innovation management, what I have been observing over time is that about 10% of all innovation-related minimum viable products, proof of concepts, pilots, will turn into a product.”
The problem would rather lie in unrealistic expectations: “Everybody rushed at it because one believed that since it’s accessible through natural language, it should be easier to deploy, to implement, and there are no drawbacks and negatives. That might explain why the failure rate is slightly higher than with technologies we saw in the past.”
This assessment aligns with Gartner’s prediction that 30% of GenAI projects will be abandoned after proof of concept by 2025 due to poor ROI and unclear business value.
Despite current limitations, Schaffrik maintains his bold prediction from his July 2025 analysis “Demystifying Artificial General Intelligence” that Artificial General Intelligence (AGI) represents “the biggest change in tech we have ever seen and is starting right now.” This vision, which could influence AI bubble burst scenarios, is structured around three maturity stages.
A crucial point raised in our discussion concerns the difference between training and experience. As I pointed out to Schaffrik, experience develops critical thinking that current LLMs don’t yet possess, which could impact AI bubble burst predictions.
“We might get to a point where most of us humans wouldn’t be able to tell if on the other side, a human or a machine is interacting with us. There will be areas where we will still be able to tell. But experience is something we could at least partially solve with more data and better data,” states Schaffrik.
The solution, according to him, lies in massive data collection: “So much of the billions of investment money flowing now into all these big companies is also there to collect and curate data, including from the physical environment. Bringing all this data together will create something that mimics experience.”
The philosophical question of what humans will do when machines surpass us in thinking capabilities represents Schaffrik’s personal concern regarding potential societal implications of advanced AI, regardless of any AI bubble burst.
“That’s my personal doomsday scenario, I must say. It’s not good for us humans to just idle around. So it’s not so much a technical conversation, but more a political, societal, psychological and philosophical one. So I’m sure we are far away from this, but we are getting there.”
Regarding leadership in this transformation, Schaffrik acknowledges the complexity: “Rulers are supposed to rule. The question is more like, are they intentionally gathering a diverse set of experts who would be able to consult them? Technically, this is possible. Are they willing to? It’s another question.”
His confidence in human adaptability remains intact: “I’m still confident that once we are realizing the true dangers of certain technologies, we will start to rethink. And we have always found a way to move forward, and we will find a way this time as well.”
Our conversation with Bernhard Schaffrik reveals that the AI bubble burst question transcends simple financial considerations. While financial markets may indeed experience corrections, the underlying technology continues its unstoppable advancement.
The key insight is that we’re witnessing a fundamental shift that will persist regardless of market volatility. Schaffrik’s analysis suggests that rather than fearing an AI bubble burst, we should focus on preparing for the transformative changes ahead. The technology won’t disappear, but it will evolve in ways we can barely imagine today. As we stand at this inflection point, the question isn’t whether the AI bubble will burst, but how we’ll navigate the profound societal and technological transformations that lie ahead.
The AI bubble burst debate, ultimately, is just the beginning of a much larger conversation about our future with increasingly capable technology.