EU AI Act and AI-Generated Images: What Creators Must Disclose in 2026
The EU AI Act's transparency obligations for AI-generated and AI-edited images went live in August 2026. Here is what creators and small publishers actually need to do.
Article 50 of the EU AI Act — the transparency obligations for generative AI output — became enforceable on August 2, 2026. Most of the early reporting focused on chatbots and deepfake video, but the rules also cover still images. If you publish to an EU audience and any part of your image pipeline touches a generative model, you likely have a disclosure duty. Here is the short version without the lawyer-speak.
Who this applies to
The obligation falls on the deployer of the AI system, which in practice means the person or company publishing the image. If you are:
- A freelance designer in Berlin posting AI-generated concept art to a client's site
- A US-based publisher whose work is readable in the EU
- A marketplace seller using AI upscaling on product photos shipped to EU customers
You are in scope. The threshold is not "do you live in the EU" — it is "does your output reach EU users."
There are exemptions for personal, non-professional use and for content where the AI edit is "standard post-processing that does not substantially alter the meaning" — more on that below.
What counts as AI-generated
Article 50 distinguishes several categories of AI involvement, and the disclosure rules differ for each:
| Category | Example | Disclosure required? |
|---|---|---|
| Fully AI-generated | Midjourney, DALL-E, Stable Diffusion output | Yes, explicit |
| AI-edited (substantial) | Background replacement, generative fill, face swap | Yes, explicit |
| AI-edited (standard) | Upscaling, denoising, sharpen, auto-white-balance | No, but recommended |
| Deepfake (person depicted) | Any synthetic or altered likeness | Yes, prominent label |
"Standard post-processing" is defined narrowly. The European AI Office's February 2026 guidance clarifies that AI-powered background removal, sky replacement, and object removal all fall into the "substantial" category even when the tool markets itself as "one-click."
What the disclosure actually has to say
Two things are required:
- Machine-readable metadata indicating the image was AI-generated or AI-altered
- A clear and distinguishable label visible to the viewer
The machine-readable part is where C2PA Content Credentials come in. The AI Act does not mandate C2PA by name, but the August 2026 guidance lists it as a recognized implementation. Expect this to harden into a de facto requirement.
The visible label can be a caption, a watermark, or an icon overlay. "Generated with AI" or "Contains AI-generated elements" is fine. The label has to be noticeable without the viewer hunting for it.
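For web publishing, one way to keep the label "noticeable without hunting" is to bake it into the figure markup rather than relying on alt text. A minimal sketch in Python; the helper name and default label text are ours, not anything the Act prescribes:

```python
from html import escape

def figure_with_ai_label(src: str, alt: str,
                         label: str = "AI-generated image") -> str:
    """Wrap an image in an HTML <figure> with a visible AI-disclosure
    caption. Hidden alt text alone does not satisfy the "visible to the
    end user" requirement, so the label goes in a <figcaption>."""
    return (
        f'<figure>'
        f'<img src="{escape(src, quote=True)}" alt="{escape(alt, quote=True)}">'
        f'<figcaption>{escape(label)}</figcaption>'
        f'</figure>'
    )

markup = figure_with_ai_label("hero.webp", "Concept art of a harbor city",
                              "Contains AI-generated elements")
```

This puts the disclosure in the same typeface and position as any other caption, which is the pattern the regulation's "clear and distinguishable" wording points at.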
Upscaling and background removal
This is the murky part. If you are using AI upscaling to take a 1000px product photo to 4000px, is that "substantial"? The answer from the February guidance: it depends on whether the model hallucinates detail that was not in the source. Nearest-neighbor and bicubic upscaling are fine. Diffusion-based upscaling (which invents texture) is substantial and needs disclosure.
Practical rule of thumb:
- Bicubic / Lanczos upscale: no disclosure
- Real-ESRGAN style upscale (GAN-based): disclose
- Diffusion upscale (SDXL, Topaz Gigapixel AI): disclose
- AI background removal (rembg, Photoshop Remove BG): disclose
- AI denoising (DxO DeepPRIME, Topaz Denoise AI): recommended, not required
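The rule of thumb above can be encoded as a small lookup and used as a pipeline guard. The operation names and disclosure levels here are our own shorthand, not an official taxonomy from the Act or the guidance:

```python
# Rough encoding of the rule of thumb above. Unknown operations
# default to "required" -- when unsure, disclose.
DISCLOSURE_RULES = {
    "bicubic_upscale": "none",
    "lanczos_upscale": "none",
    "gan_upscale": "required",        # Real-ESRGAN style
    "diffusion_upscale": "required",  # invents texture
    "ai_background_removal": "required",
    "ai_denoise": "recommended",
}

def needs_disclosure(operations):
    """Return the strictest disclosure level across a pipeline's operations."""
    order = {"none": 0, "recommended": 1, "required": 2}
    levels = [DISCLOSURE_RULES.get(op, "required") for op in operations]
    return max(levels, key=order.__getitem__) if levels else "none"

needs_disclosure(["lanczos_upscale", "ai_denoise"])  # -> "recommended"
```

The "strictest level wins" logic matters because real pipelines chain operations: one diffusion-based step anywhere in the chain makes the whole output disclosable.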
If you are unsure, disclose. The fines are structured as up to 3% of global annual turnover or 15M EUR, whichever is higher.
Where provenance metadata lives
C2PA writes a cryptographically signed manifest into the image file itself. The manifest records:
- The tool that generated or edited the image
- The model version (where applicable)
- A hash of the source asset if there was one
- The signing party's identity
When the image is re-saved or re-exported, tools that understand C2PA preserve the manifest. Tools that do not — including most browser-based converters and social media uploaders — strip it silently. This is the compliance gap most creators hit first.
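Because the stripping is silent, it is worth spot-checking files after they pass through any converter or uploader. A crude heuristic, assuming only that C2PA manifests live in JUMBF boxes (box type `jumb`) and that C2PA manifest stores carry a `c2pa` label:

```python
def looks_like_it_has_c2pa(data: bytes) -> bool:
    """Heuristic check for an embedded C2PA manifest. C2PA stores its
    manifest in a JUMBF box whose type bytes are b'jumb', with a 'c2pa'
    label on the manifest store. A raw byte scan is not a parser and can
    false-positive, but it reliably catches the common failure mode:
    a converter silently dropping the manifest altogether."""
    return b"jumb" in data and b"c2pa" in data
```

For anything beyond a smoke test, a real manifest validator (such as the open-source `c2patool` CLI) is the right tool; this scan only tells you the manifest bytes survived the round trip.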
For more on how the manifest format actually works, see our companion post on C2PA Content Credentials.
What a compliant workflow looks like
For a typical "AI upscale then convert to WebP for web" flow:
1. Source photo (original, has C2PA from camera if supported)
2. AI upscale in tool that writes C2PA (Topaz, Adobe Firefly)
3. Convert to WebP with a C2PA-preserving tool
4. Publish with visible "AI-upscaled" caption
The conversion step is where most workflows break. Most command-line tools (ImageMagick, cwebp) strip metadata by default. Konvrt's image converter preserves C2PA manifests through format conversions: WebP, AVIF, and JPEG all support the JUMBF metadata box where the manifest lives.
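One way to make step 3 fail loudly instead of silently is to wrap whatever converter you use and verify the manifest bytes survived. This is a hypothetical wrapper of ours, not part of any tool's API; the check is a naive byte scan for the JUMBF box-type marker, not a full manifest validation:

```python
import pathlib
import subprocess

def convert_preserving_manifest(src: str, dst: str,
                                converter_cmd: list) -> None:
    """Run an external converter command, then raise if the source had a
    JUMBF box (marker bytes b'jumb', where C2PA manifests live) and the
    output no longer does. `converter_cmd` is whatever CLI invocation you
    trust to preserve the manifest; this wrapper only verifies it did."""
    had_manifest = b"jumb" in pathlib.Path(src).read_bytes()
    subprocess.run(converter_cmd, check=True)
    if had_manifest and b"jumb" not in pathlib.Path(dst).read_bytes():
        raise RuntimeError(
            f"{dst}: C2PA manifest was stripped during conversion")
```

A guard like this turns the most common compliance gap (step 3 quietly discarding provenance) into a hard build error you notice before publishing.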
Practical disclosure language
Short phrases that satisfy the label requirement:
- "AI-generated image"
- "Contains AI-generated elements"
- "AI-upscaled from original"
- "Background replaced with AI"
Keep the label in the visible area, in the same typeface as your other captions. Hidden alt text is not sufficient on its own: the regulation specifies "visible to the end user."
The takeaway: if your image pipeline includes generative AI for anything beyond contrast curves and sharpening, you have needed both a visible label and embedded provenance metadata since August 2026. Preserving C2PA through your conversion step is the single biggest technical fix most creators are missing.