
Are you using an in-house tool powered by an AI model from OpenAI, Google, or Meta to produce marketing copy? You might soon be responsible for watermarking your AI-generated content.
In this episode of the Privacy Partnership Podcast, Rob explores a common scenario in which a company fine-tunes a general-purpose AI model and builds a simple internal tool for staff to generate copy in its own brand voice.
While the company might think it is simply a "deployer" of OpenAI's general-purpose AI model, it could actually be the provider of an AI system under the AI Act.
This matters because Article 50 of the AI Act introduces a wide-ranging transparency requirement: the provider of an AI system that generates synthetic text, video, image, or audio content must ensure the output is detectable as AI-generated.
While there are widely adopted image and video watermarking techniques, reliably watermarking text is more difficult.
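To see why text is the hard case, here is a minimal, purely illustrative sketch of one well-known statistical approach, the "green-list" watermark (Kirchenbauer et al., 2023): the generator is nudged toward a pseudorandom subset of the vocabulary seeded by the previous token, and a detector checks whether suspiciously many tokens fall in that subset. The function names are my own, and this is not a technique Article 50 prescribes.

```python
import hashlib

def green_tokens(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly partition the vocabulary using a hash seeded by the
    previous token, so roughly `fraction` of tokens land in the 'green' list.
    A watermarking generator would bias its sampling toward this set."""
    cutoff = int(len(vocab) * fraction)
    def score(tok: str) -> int:
        digest = hashlib.sha256((prev_token + tok).encode()).digest()
        return int.from_bytes(digest[:4], "big")
    return set(sorted(vocab, key=score)[:cutoff])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection side: count how many tokens fall in the green list seeded by
    their predecessor. Watermarked text scores well above `fraction`;
    unwatermarked text hovers around it."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_tokens(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

Even this toy shows the fragility: paraphrasing or deleting a few tokens changes the seeds and erodes the statistical signal, which is part of why robust text watermarking remains an open problem.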
This legal obligation kicks in on 2 August 2026. Importantly, the AI Act's grandfathering clauses don't appear to cover Article 50 systems, so the requirement may apply retroactively to systems already in use by that date.
The AI Office is supposed to issue codes of practice in this area, but there’s been no obvious progress on this task so far.
These obligations appear to have been designed with "big tech" in mind, but they can apply to much smaller organisations too.
Watch the video for a breakdown of how an organisation can easily become a "provider" under the AI Act, how "models" differ from "systems", and why many companies might end up in scope of Article 50.
Let me know if you need help navigating these or other AI Act requirements.