
Episode Number: L018
Title: AI 2026: Transparency Laws, Reasoning Models, and the Power Play
Welcome to a deep dive into the rapidly shifting landscape of Artificial Intelligence. In this episode, we explore the major legal and technical transformations set to redefine the industry—from California’s groundbreaking transparency mandates to the emergence of "reasoning models" that challenge everything we thought we knew about AI regulation.
What’s inside this episode:
The End of the Black Box? California’s AB 2013: Effective January 1, 2026, generative AI developers must publicly disclose detailed information about their training data, including dataset sources and whether they contain copyrighted or personal information. We discuss how this law—and similar documentation templates under the EU AI Act—aims to shine a light on how models are built.
The Shift to Inference-Time Scaling: The industry is hitting a "pretraining wall". Instead of just making models learn from more data, giants like OpenAI and DeepSeek are moving toward "test-time compute". Models like the OpenAI o-series and DeepSeek-R1 gain intelligence by "thinking out loud" through extended chains of thought (CoT) at the moment of the query.
The AI Oligopoly and Infrastructure Power: The AI supply chain is becoming increasingly concentrated. We analyze the market power of hardware leaders like NVIDIA and ASML and the cloud dominance of AWS, Azure, and Google Cloud. We also explore the "antimonopoly approach" to ensuring this technology remains democratic and accessible.
Safety, Deception, and "Chain of Thought" Monitoring: Can we trust what an AI says it’s thinking? We investigate CoT monitoring—a safety technique that allows humans to oversee a model’s intermediate reasoning to catch "scheming" or misbehavior before it happens. However, this opportunity is "fragile" as models may learn to rationalize or hide their true intentions.
Medical AI & The "Unpredictability" Problem: AI-enabled medical devices are facing a crisis of clinical validation. We look at the gaps in the FDA’s 510(k) pathway and why "unpredictability" in AI outputs makes robust post-market surveillance (PMS) essential for patient safety.
GDPR vs. LLMs: The Right to be Forgotten: How do you delete a person from a neural network? We tackle the collision between GDPR’s Right to Erasure and the architecture of large language models, where personal data becomes inextricably embedded in billions of parameters.
Keywords: Generative AI Regulation, California AB 2013, EU AI Act, Inference Compute Scaling, AI Safety, OpenAI o1, DeepSeek-R1, NVIDIA AI, GDPR and LLMs, AI Transparency.
This episode is essential for developers, policymakers, and tech enthusiasts who want to understand the new rules of the road. The era of scaling for scaling’s sake is over—the age of accountability and reasoning has begun.
Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐
(Note: This podcast episode was created with support and structuring from Google's NotebookLM.)
By Claus Zeißler