Generative AI has developed so quickly over the past two years that massive breakthroughs seemed more a question of “when” than “if.” But in recent weeks, Silicon Valley has grown increasingly concerned that progress is slowing.

One early sign is the lack of improvement between models released by the biggest players in the space. OpenAI is reportedly seeing a significantly smaller quality gain in its next model, GPT-5, while Anthropic appears to have delayed the release of its most powerful model, Opus, judging by wording quietly removed from its website. Even at Google, the upcoming version of Gemini is reportedly falling short of internal expectations.

If progress is plateauing, it would call into question a core assumption that Silicon Valley has treated as religion: scaling laws, the idea that adding more computing power and more training data will keep producing better models indefinitely. Recent developments suggest they may be more theory than law. The key problem could be that AI companies are running out of fresh data to train models on, hitting what experts call the “data wall.” In response, they are turning to synthetic data, meaning data generated by AI models themselves.

CNBC’s Deirdre Bosa explores whether AI progress is slowing, and what it means for the industry.
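For readers who want the formal version of the “scaling laws” idea: the term usually refers to empirical power-law fits relating a model’s loss to its parameter count and training-data size, as popularized by the Chinchilla paper (Hoffmann et al., 2022). The sketch below shows that general form; the data-wall interpretation attached to it is illustrative, not drawn from this article.

% Chinchilla-style scaling law (Hoffmann et al., 2022): expected loss L
% as a function of parameter count N and training tokens D.
% E, A, B, \alpha, \beta are empirically fitted constants.
\[
  L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
\]
% “More compute and more data” shrinks the A/N^alpha and B/D^beta terms,
% pushing loss toward the irreducible floor E.
% The “data wall”: if D stops growing because fresh training data runs out,
% the B/D^beta term stops shrinking, and further gains must come from N alone,
% which yields diminishing returns.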