MIRAS and Titans: Google shows how AI models can continually learn.
OpenAI must disclose 20 million ChatGPT chats to The New York Times.
New security vulnerability: AI agents in GitHub and GitLab threaten enterprise workflows.
Waymo's robotaxis are under investigation for passing stopped school buses.
The AI news for December 6th, 2025
--- This episode is sponsored by ---
Find out more about today's sponsor, Pickert, at pickert.de.
---
Would you like to create your own AI-generated and 100% automated podcast on your chosen topic? --> Reach out to us, and we’ll make it happen.
Here are the details of the day's selected top stories:
MIRAS and Titans: Google shows how AI models can continually learn.
Source: https://the-decoder.de/miras-und-titans-google-zeigt-wie-ki-modelle-dauerhaft-dazulernen-koennen/
Why did we choose this article? It covers Google's MIRAS and Titans work on continual learning and long‑term memory — a technically significant direction that changes how models can improve during real use. Important for builders, researchers, and product teams planning persistent, evolving AI features.
OpenAI must disclose 20 million ChatGPT chats to The New York Times.
Source: https://the-decoder.de/openai-muss-20-millionen-chatgpt-chats-an-new-york-times-herausgeben/
Why did we choose this article? It is a major legal development with direct consequences for data use, model transparency, and user privacy. It signals increased legal risk for companies that train or serve LLMs — practical must‑knows for compliance, legal teams, and product owners.
New security vulnerability: AI agents in GitHub and GitLab threaten enterprise workflows.
Source: https://the-decoder.de/neue-sicherheitsluecke-ki-agenten-in-github-und-gitlab-gefaehrden-unternehmens-workflows/
Why did we choose this article? It flags a concrete, actionable security risk in developer CI/CD pipelines that can affect confidential code and deployments. Relevant for engineering leaders, security teams, and developers — it prompts an immediate review of AI-agent integrations and mitigation steps (access controls, sandboxing, prompt validation).
Waymo's robotaxis are under investigation for passing stopped school buses.
Source: https://www.theverge.com/news/838879/waymo-school-buses-probe
Why did we choose this article? It is a high‑stakes safety and regulatory story showing how real‑world AI systems interact with law, public safety, and trust. Useful for anyone tracking deployment risks, AV policy, or operational safety practices in autonomous systems.
Do you have any questions, comments, or suggestions for improvement? We welcome your feedback at [email protected].