If you ask a search engine a question about a travel destination today, you will receive an answer in seconds. But where does the information come from? Who did the research, took the photos, gathered impressions on site? That remains in the dark. The answer comes from texts and images. Created at some point. Somewhere. By someone. This someone is systematically pushed out of the system.
The Plasselb case: an experiment with consequences
To show how AI alienates images from their origin, I conducted an experiment. As the test subject, I chose one of my most authentic photos: four elderly men in traditional Swiss Alpine dairymen’s costumes, taken in Plasselb with a Canon EOS D1. This image symbolises home, tradition and artisanal photography.

I asked a modern AI model, known as Nano Banana, to incorporate the photo into an infographic. The result was sobering. The AI did not place the original but ‘repainted’ it: the town sign “Plasselb” became “Alpenschatz”, and a man’s tobacco pouch turned into a block of Swiss cheese. The AI recognised Switzerland, traditional costume and beards – and invented a matching place-name sign and cheese.
![An image from a genuine source is turned into a data stream feeding a training dataset [AI-generated (gemini)]](https://mlbrir8kaysj.i.optimole.com/cb:Sgnq.97b/w:auto/h:auto/q:mauto/f:best/https://tellerrandstories.de/wp-content/uploads/2026/02/plasselbScanner.jpg)
![The AI breaks the original down into its components. Elements from the photo (costume, beards, pipes, town sign) survive only as abstract concepts floating in a digital network [AI-generated (gemini)]](https://mlbrir8kaysj.i.optimole.com/cb:Sgnq.97b/w:auto/h:auto/q:mauto/f:best/https://tellerrandstories.de/wp-content/uploads/2026/02/plasselbAnalyse.jpg)
When precision is considered fake
A paradoxical cycle emerges: AI systems train on existing content from a time when the media still had local correspondents. But what happens when these sources dry up?
Over generations of AI models, the origins disappear. A photo is fragmented, the fragments flow into the training of the next model, which generates new, alienated images from them. The connection to the original research is severed. In the end, the models produce digital copies that no longer have anything to do with reality: artificial dementia.
The paradox of perfection: the ‘false positive’ trap
My experiment revealed a technical problem that should alarm photographers. Google’s SynthID system labelled my authentic photo as ‘AI-generated’ – with all the unpleasant consequences that entails for me. The reason: to make the men in their costumes stand out, I had selectively edited the photo in Lightroom. This precision created statistical patterns that the algorithm categorised as ‘unnatural’. The bitter realisation: the more professionally a photographer works, the more likely automated systems are to label them a forger. A technical label does not prove the truth – it is often just a blind signal.
Technical background: the provenance cross-check
In my report to Google’s AI Vulnerability Programme (issue 483464959), I proposed a solution: the integration of SynthID detection with C2PA provenance manifests (Content Credentials).
The problem: SynthID interprets the statistical pixel anomalies produced by professional masking as generative-AI artefacts – regardless of whether the image carries a verified C2PA certificate with a documented processing history.
The solution: a cross-check between the detection signal and the provenance manifest. If the C2PA manifest documents “Local masking in Lightroom Classic v15.1.1, Canon RAW source”, the system should not misinterpret this manual processing as AI generation.
The rejection: Google’s response: “Won’t Fix (Intended Behaviour)”. The misclassification of authentic works is not considered a security risk, but accepted system behaviour.
The consequence: two competing truths – Adobe says “human”, Google says “AI”. The user is left caught between the two, and trust in digital proof of origin is eroding.
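To make the proposed cross-check concrete, here is a minimal sketch in Python. It illustrates the decision logic only: `detect_ai_artefacts` and `read_c2pa_manifest` are hypothetical placeholders standing in for SynthID and a C2PA validator, neither of which exposes this interface, and the fields of `ProvenanceRecord` are my own shorthand for what a verified manifest documents.

```python
# Minimal sketch of the proposed cross-check between an AI detector and a
# C2PA provenance manifest. Both helper functions are placeholders: neither
# SynthID nor a C2PA validator exposes this interface; they stand in for
# whatever detector and manifest reader a platform actually uses.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProvenanceRecord:
    signature_valid: bool    # is the C2PA manifest cryptographically intact?
    camera_capture: bool     # e.g. a capture action referencing a Canon RAW source
    only_manual_edits: bool  # e.g. local masking in Lightroom, no generative step


def detect_ai_artefacts(image_path: str) -> float:
    """Placeholder for a statistical detector such as SynthID.
    Returns a score in [0, 1]; higher means 'looks AI-generated'."""
    raise NotImplementedError("assumption: supplied by the platform")


def read_c2pa_manifest(image_path: str) -> Optional[ProvenanceRecord]:
    """Placeholder for a C2PA / Content Credentials validator.
    Returns None if no manifest is embedded in the file."""
    raise NotImplementedError("assumption: supplied by the platform")


def label_image(image_path: str, threshold: float = 0.8) -> str:
    score = detect_ai_artefacts(image_path)
    manifest = read_c2pa_manifest(image_path)

    # A verified manifest documenting a camera capture and purely manual
    # editing should override the purely statistical signal.
    if (manifest and manifest.signature_valid
            and manifest.camera_capture and manifest.only_manual_edits):
        return "authentic: camera capture with documented manual edits"

    return "likely AI-generated" if score >= threshold else "no AI markers detected"
```

The point is not the exact threshold but the order of precedence: a cryptographically verified processing history should outrank a purely statistical guess.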
The economic and institutional erosion
For 25 years, search engines and aggregators have been using content from established media. While this content trains AI systems, the authors are left empty-handed. For many travel journalists, this means the end.
Even institutions that are supposed to preserve knowledge are losing out. Wikipedia documents the death of established media, yet online magazines such as Tellerrandstories fall through the cracks despite an ISSN and national-library archiving. Who will document who is still doing original research in the future if the relevance criteria are based on a bygone media era?
![Medium or website? On Wikipedia, online magazines and price-comparison portals fall into the same category, and traffic determines relevance [AI-generated (gemini)]](https://mlbrir8kaysj.i.optimole.com/cb:Sgnq.97b/w:auto/h:auto/q:mauto/f:best/https://tellerrandstories.de/wp-content/uploads/2026/02/wikiRelevanz.jpg)
Authenticity as a business model
There are ways out. The Tellerrandstories model is based on financial independence: we publish parts of our research free of charge and receive fees from other media. My photos are licensed through the photo agency Alamy, which is also committed to ancillary copyright and source labelling. Transparency is not a luxury; it is essential for credibility.
What remains to be done? A manifesto for transparency
We need new forms of labelling – not as a seal of quality, but as proof of production. Just as proof of origin is standard for foodstuffs, information needs a digital “package insert”; a sketch of what such a record could contain follows the list below.
- Linking detection and provenance: Systems such as SynthID must be harmonised with cryptographic seals such as C2PA (Content Credentials). Only proof of the processing steps protects the authors.
- Responsibility of the platforms: Google, Meta and co. must also actively protect and remunerate the sources they use.
- Take a stand: Journalists must disclose their methodology. “I was there. I took this photo. I vouch for it with my name.”
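What such a digital “package insert” might contain can be sketched in a few lines of Python. The structure is illustrative only: the action labels are loosely borrowed from the C2PA actions assertion, but the field names, values and the small check are a simplification of my own, not the real manifest schema.

```python
# Illustrative "package insert" for the Plasselb photo, loosely modelled on
# a C2PA actions list. Field names and values are simplified placeholders,
# not the actual manifest format.

package_insert = {
    "title": "Alpine dairymen, Plasselb",
    "claim_generator": "Adobe Lightroom Classic 15.1.1",  # processing tool named above
    "actions": [
        {"action": "c2pa.created", "source": "Canon RAW file"},
        {"action": "c2pa.edited", "detail": "local masking / selective exposure"},
    ],
    "generative_ai_used": False,
    "author": "photographer, identified by name",
}


def documents_only_manual_work(record: dict) -> bool:
    """True if every documented step is a capture or a manual edit."""
    allowed = {"c2pa.created", "c2pa.edited"}
    return not record["generative_ai_used"] and all(
        step["action"] in allowed for step in record["actions"]
    )


print(documents_only_manual_work(package_insert))  # -> True
```

A platform that read such a record before attaching an ‘AI-generated’ label would have a documented reason not to flag the Plasselb photo.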
Epilogue: The wall of code – when there is a method to the error
After my discovery, I sought dialogue. I reported the problem (issue 483464959) to the Google AI Vulnerability Programme and suggested a solution that links the detection to the C2PA data.
The response was prompt, automated and sobering: “Won’t Fix (Intended Behaviour)”. The answer came one minute after submission. One minute to review a multi-page technical report with a chain of evidence? Not likely. The response bears all the signs of automated triage: standard wording, references to irrelevant categories (safety bypasses, hallucinated sandbox escapes), no engagement with the actual content.
My report was not rejected because it was wrong. It was rejected because it is too common. “This is one of the most common issues reported”, the system writes. In other words, many photographers and journalists have discovered the same problem. But instead of fixing it, Google systematically filters these reports out.
This is no longer a bug – it’s policy.
The fact that an authentic work is branded as an AI product is not a mistake for the tech giant, but ‘known behaviour’. In the logic of Silicon Valley, incorrect labelling is not a security risk, but accepted noise in the system.
For journalists and photographers, this means a loss of authorship. If the algorithm is wrong, people have no right to object. Technology defines the truth, and anyone who falls through the cracks is out of luck.
The more machines ‘protect’ reality, the more easily the people behind it disappear from the picture. That is precisely why it is more important today than ever to insist on our own authorship. A “won’t fix” must not be the last word on our work.
![In the end, the image has disappeared. Only the information about the visual original is preserved. The photographer’s creative achievement counts for nothing [AI-generated (gemini)]](https://mlbrir8kaysj.i.optimole.com/cb:Sgnq.97b/w:auto/h:auto/q:mauto/f:best/https://tellerrandstories.de/wp-content/uploads/2026/02/fileNotFound.jpg)
