Meta Now Rejects 14 Percent of Ads for Undisclosed AI Content

Meta quietly turned AI content disclosure into one of the most common reasons an ad gets rejected, and the speed of that shift caught many advertisers off guard. "Undisclosed AI Content" is now the third-largest ad rejection category across Facebook and Instagram, responsible for 14 percent of all rejections as of early 2026.

Key facts at a glance

  • "Undisclosed AI Content" accounts for 14 percent of all Meta ad rejections
  • Any AI-generated or substantially AI-modified image, video, or audio must carry an "AI-generated" label
  • Meta scans submissions for C2PA metadata and synthetic visual patterns automatically
  • The policy applies globally, driven by EU AI Act enforcement timelines
  • Teams that batch-produce ad creative with AI tools are the most exposed

Meta's New AI Content Labeling Requirement for Ads

Meta now requires an "AI-generated" label on any ad creative where AI tools generated, substantially modified, or composited visual or audio content. The policy covers images, video, and audio alike, so if a generative AI tool produced the asset or meaningfully altered it, the advertiser must disclose that during submission.

To enforce the rule, Meta deployed detection models that scan every ad submission for C2PA metadata and synthetic artifacts. C2PA is a provenance standard that embeds origin data into media files, which means that if an image was created by a tool supporting C2PA, Meta can flag it automatically. But the detection goes further than metadata alone, because it also looks for visual and audio patterns typical of AI-generated content even when no provenance data is present.
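For teams that want to know in advance whether an asset carries provenance data, a rough pre-submission check is possible. C2PA manifests are embedded as JUMBF boxes whose manifest store is labeled "c2pa", so the four-byte label shows up in the raw file bytes. The sketch below is a heuristic for your own pipeline, assuming JPEG-style assets; it is not Meta's detector, and a miss does not mean the asset is safe, since Meta also scans for synthetic visual patterns:

```python
def has_c2pa_marker(path: str) -> bool:
    """Heuristic pre-check: look for the 'c2pa' JUMBF box label that
    C2PA-aware tools embed when exporting media. A hit suggests the
    asset carries provenance metadata and will need an AI disclosure."""
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifests are stored in JUMBF boxes; the manifest store's
    # label is 'c2pa', so those bytes appear verbatim in the file.
    return b"c2pa" in data
```

Running this over a batch of creatives before upload gives an early list of assets that definitely require the label, even if it cannot prove the absence of AI involvement.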

How the EU AI Act Pushed Meta Toward Global Enforcement

The timing tracks with the EU AI Act enforcement timeline. The Act requires platforms to label AI-generated content, and Meta appears to be applying these rules globally rather than maintaining separate policies for EU and non-EU markets. That approach is consistent with how Meta handled GDPR, since building one system is cheaper than building two even if it means applying stricter rules in markets where they are not yet legally required.

The 14 percent figure is striking because it appeared so quickly. Advertisers who had been using AI tools for creative production without labeling, which was common practice throughout 2024 and 2025, suddenly found their ads rejected. Many of these teams were not even aware that disclosure was expected, because Meta rolled out the requirement with limited advance notice relative to the scale of the change.

How AI Disclosure Affects Ad Creative Workflows

The practical effect is that any team running paid social on Meta's platforms now needs an AI disclosure step in their creative approval workflow, a step that did not exist six months ago. Skipping it risks rejected ads and delayed campaigns, which makes it a process problem as much as a compliance one.

This also changes the calculus for AI-generated creative more broadly. Tools like Midjourney, DALL-E, and Adobe Firefly are already standard parts of many ad production pipelines, so the question is no longer whether to use them but whether your process accounts for the labeling requirement before submission. Teams that batch-produce ad creative using AI tools are the most exposed, because a single missed label can hold up an entire campaign.
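One way to make that disclosure step concrete is a validation gate that runs before any batch is submitted. The sketch below is a hypothetical internal check, not part of Meta's API; the field names (`asset_id`, `used_ai_tools`, `ai_label_set`) are assumptions standing in for however your team tracks creative metadata:

```python
from dataclasses import dataclass

@dataclass
class Creative:
    asset_id: str        # hypothetical internal identifier
    used_ai_tools: bool  # set by the designer or an automated check
    ai_label_set: bool   # whether the "AI-generated" disclosure is set

def disclosure_gate(batch: list[Creative]) -> list[str]:
    """Return the asset_ids at risk of rejection: AI was used but the
    disclosure label was not set. An empty list means the batch passes."""
    return [c.asset_id for c in batch
            if c.used_ai_tools and not c.ai_label_set]
```

Blocking submission whenever the gate returns a non-empty list turns a missed label into a pre-flight failure on one asset rather than a platform rejection that holds up the whole campaign.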

The wider implication is that platform-level AI transparency requirements are moving faster than most advertisers expected. Meta went from optional AI labels to mandatory enforcement with real rejection rates in less than a year, and other platforms are likely watching this closely. Advertisers who build disclosure into their workflow now will have a smoother transition when similar rules appear on other ad networks.

Open Questions Around Meta's AI Detection Threshold

Meta has not yet published a detailed breakdown of which AI tools trigger the most rejections, or whether the detection models produce false positives at a meaningful rate. Both questions matter for teams that use AI for background removal, color correction, or minor edits that may or may not cross Meta's threshold for "substantial modification." Until Meta clarifies where that line falls, the safest approach is to disclose any AI involvement in the creative regardless of how minor it seems.

Taylor
Guest Contributor