Generative AI has radically changed content production. What sounded like science fiction just a few years ago is now part of everyday creative life in marketing departments and agencies: photorealistic scenes, virtual brand ambassadors, synthetic voices, and fully AI-generated campaign visuals.
With these new possibilities, however, comes greater responsibility. The more realistic AI content becomes, the more pressing the question: Is this still creative innovation, or is this where deception begins? And does the content have to be labeled?
We have formulated clear guidelines on how to handle AI content on social media and when labeling is required.
Is AI content reprehensible per se?
In short: no.
AI is a tool. No more, no less. It can make processes more efficient, open up new creative spaces, and create visual worlds that simply would not be possible without it.
The problem is not the technology itself, but its use.
AI content becomes critical when it:
- simulates real events that never took place
- suggests product properties that do not exist
- imitates real people (face or voice) without their consent
- misleads in sensitive contexts (health, politics, crises)
The risk therefore lies not in the use of AI, but in a lack of transparency and missing guardrails.
When does AI content deceive users?
Deception occurs when users assume that they are seeing a real situation even though it has been completely or substantially artificially created.
Examples:
- A photorealistic setup that never physically existed.
- An “event” visual that never took place.
- An AI-generated face that is perceived as a real person.
- Before-and-after images that show unrealistic results due to AI processing.
Trust is one of the most valuable currencies, especially in brand communication. If consumers get the feeling that they are being deliberately misled, this creates a reputational risk that outweighs any short-term performance gain.
Transparency is therefore not a legal formalism, but a brand strategy.
Does AI content have to be labeled?
There is currently no uniform global AI labeling law for social media. However, several frameworks already apply today:
- Platform guidelines (e.g. Instagram, TikTok, YouTube)
- Advertising law (misleading and surreptitious advertising)
- EU regulations such as the Digital Services Act and the EU AI Act (with mandatory labeling from August 2, 2026)
Regardless of the final legal situation, one clear principle applies to us:
If AI is essential for the perception of the content, it should be labeled.
We distinguish between three levels:
- Fully AI-generated content (K1)
Completely generated images, videos or avatars without a real recording.
→ Labeling always required.
- Partially AI-generated content (K2)
Real images that have been significantly altered by AI, for example:
- added backgrounds or objects
- AI-generated persons
- synthetic voices
→ Labeling required if these elements shape how the content is perceived.
- AI in the background (K3)
AI was only used for brainstorming, text support or internal structuring.
→ No labeling necessary in the post, as the visible content is real.
This differentiation is crucial. Not every use of AI needs to be communicated, but every visible, perception-shaping generation should be made transparent.
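The K1/K2/K3 decision rule above can be sketched as a small piece of logic. This is purely illustrative; the names `ContentClass` and `needs_label` are our own invention, not part of any platform API.

```python
# Illustrative sketch of the K1/K2/K3 labeling decision described above.
# ContentClass and needs_label are hypothetical names, not an official API.
from enum import Enum


class ContentClass(Enum):
    K1 = "fully AI-generated"       # generated images, videos, avatars
    K2 = "partially AI-generated"   # real footage significantly altered by AI
    K3 = "AI in the background"     # brainstorming, text support, structuring


def needs_label(content_class: ContentClass, shapes_perception: bool = False) -> bool:
    """Return True if the post should carry an AI label.

    K1: always label.
    K2: label only if the AI elements shape how the content is perceived.
    K3: no label needed; the visible content is real.
    """
    if content_class is ContentClass.K1:
        return True
    if content_class is ContentClass.K2:
        return shapes_perception
    return False


print(needs_label(ContentClass.K1))                          # True
print(needs_label(ContentClass.K2, shapes_perception=True))  # True
print(needs_label(ContentClass.K3))                          # False
```

The one judgment call the code cannot make for you is the `shapes_perception` flag for K2 content; that remains an editorial decision per post.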
How should it be labeled?
Where possible, the platform’s own labels should be used:
- Instagram & Facebook: “AI info”
- TikTok: “AI-generated content”
- YouTube: “Altered or synthetic content”
These labels can be applied natively on the platforms, either when posting in the app itself or by editing afterwards (e.g. after publishing via a third-party tool).
If labeling is not possible on the platform, information should be added in the caption or in the visual itself, for example:
- “This image was created with AI.”
- “AI-generated visualization for illustration.”
- “The voice-over was generated with an AI voice.”
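The fallback logic described here, using the platform's native label where one exists and a caption disclosure otherwise, could be sketched like this. The label strings reflect the platforms' current wording as quoted above and may change; the dictionary and helper function are hypothetical.

```python
# Hypothetical helper: prefer the platform-native AI label, fall back to a
# caption disclosure where no native label exists. Label names as of writing.
PLATFORM_LABELS = {
    "instagram": "AI info",
    "facebook": "AI info",
    "tiktok": "AI-generated content",
    "youtube": "Altered or synthetic content",
}

FALLBACK_CAPTION = "This image was created with AI."


def disclosure_for(platform: str) -> str:
    """Return the native label if the platform offers one, else a caption note."""
    label = PLATFORM_LABELS.get(platform.lower())
    if label is not None:
        return f"native label: {label}"
    return f"caption: {FALLBACK_CAPTION}"


print(disclosure_for("TikTok"))    # native label: AI-generated content
print(disclosure_for("LinkedIn"))  # caption: This image was created with AI.
```

In practice the fallback caption should be adapted to the content type, e.g. the voice-over wording from the list above for synthetic audio.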
The important thing is that the label is clear. The goal is not to problematize AI or warn against it, but simply to create transparency.
Our conclusion
AI content is neither problematic per se nor a free pass.
It is a powerful creative tool with equally great responsibility.
Brands should ask themselves three questions:
- Would our community feel deceived if they knew this content was AI-generated?
- Is the AI essential for the perception of the content?
- Would we publish the same content in good conscience without AI?
Transparency creates trust. And trust is more valuable in the long term than any short-term “wow effect” from perfect synthetic images.
The future of social media will be hybrid: real and generated.
Our task as agencies and brands is to shape this future responsibly.
Download the AI policy white paper for free now!

Author: Christoph Putz, Consultant
