Have you felt it, too? That faint tremor running through every boardroom and startup, from Lisbon to Helsinki, as we approach the next milestone in the EU Artificial Intelligence Act saga? We've sprinted past speculation; now, as July 26, 2025, dawns, we're staring at regulatory reality. The long-anticipated second phase of the EU AI Act hits in just one week, with August 2nd circled in red on every compliance officer's calendar. Notably, this phase brings the first legally binding obligations for providers of general-purpose AI models, the likes of OpenAI or Mistral, now operating under strict European guardrails.
This is the moment Ursula von der Leyen, President of the European Commission, seemed to foreshadow in February when she unleashed the InvestAI initiative, a €200 billion bet to cement Europe as an "AI continent." Sure, the PR shine is dazzling, but under the glossy surface there’s a slog of bureaucracy and multi-stakeholder bickering. Over a thousand voices—industry, academia, civil society—clashed and finally hammered out the General-Purpose AI Code of Practice, submitted to the European Commission just weeks ago.
Why all the fuss over this so-called Code? It's the cheat sheet, the copilot, for every entity wrangling with the new regime: transparency mandates, copyright headaches, and the ever-elusive specter of "systemic risk." The Code is voluntary, for now, but don't kid yourself: Brussels expects it to shape best practice and spark a compliance arms race. And, to the chagrin of lobbyists fishing for delays, the Commission rejected calls to "stop the clock." From August 2, the general-purpose AI obligations apply on schedule, with no further grace period. The AI Act's teeth are bared.
But the Act doesn't just slam the brakes on dystopian AI. It empowers the European AI Office, tasks a new Scientific Panel with evidence-based oversight, and requires each member state to designate national competent authorities to police compliance: think AI cops for the digital realm. Fines? They bite hard: up to €35 million or 7% of global annual turnover, whichever is higher, for deploying a prohibited system.
Meanwhile, debate simmers over the abandoned AI Liability Directive, a sign that harmonizing digital accountability remains the toughest knot of all. And note the irony: by codifying risks and thresholds, the EU's hard rules have spurred a burst of regulatory creativity outside the bloc. The UK's Peter Kyle is pushing cross-jurisdictional collaboration through the Regulatory Innovation Office, seeking a lighter touch, more "sandbox" than command-and-control.
So what's next for AI in Europe and beyond? Watch the standard-setters tussle. Expect the market to stratify: major AI players compelled to disclose, mitigate, and sometimes reengineer. For AI startups dreaming of exponential scale, the new gospel is risk literacy and compliance by design. The era when "move fast and break things" ruled tech is well and truly over, at least on this side of the Channel.
Thanks for tuning in. Subscribe for sharper takes, and remember: this has been a quiet please production. For more, check out quiet please dot ai.
For more check out http://www.quietplease.ai