The original disappears. Won’t Fix.

If you ask a search engine a question about a travel destination today, you will receive an answer in seconds. But where does the information come from? Who did the research, took the photos, gathered impressions on site? That remains in the dark. The answer comes from texts and images. Created at some point. Somewhere. By someone. This someone is systematically pushed out of the system.

The Plasselb case: an experiment with consequences

To show how AI alienates a work from its origin, I ventured an experiment. As the test subject, I chose one of my most authentic photos: four elderly men in traditional Swiss Alpine dairyman’s costumes, taken in Plasselb with a Canon EOS D1. This image symbolises home, tradition and artisanal photography.

The bearded men of Gruyère (Les Barbus de la Gruyère) at the Alpine procession in Oberschrot, Plaffeien. They wear the traditional costume of the Gruyère region. The waistcoat is called Bredzon, the sack Loyi / © Photo: Georg Berg

I asked Nano Banana, Google’s Gemini image model, to incorporate the photo into an infographic. The result was sobering. The AI did not place the original but ‘repainted’ it. The town sign “Plasselb” became “Alpenschatz”, a man’s tobacco pouch became a block of Swiss cheese. The AI recognised Switzerland, traditional costumes and beards – and invented a place name sign and cheese to go with it.

Image from a real source becomes a data stream for a training data set [AI-generated (gemini)]
The original is broken down into its components by the AI. Elements from the photo (costume, beards, pipes, town sign) are now just abstract concepts floating in a digital network [AI-generated (gemini)]

When precision is considered fake

A paradoxical cycle emerges: AI systems are trained on existing content from a time when the media still had local correspondents. But what happens when these sources dry up?

Over generations of AI models, the origins disappear. A photo is fragmented, the fragments flow into the training of the next model, which generates new, alienated images from them. The connection to the original research is severed. In the end, the models create digital copies that no longer have anything to do with reality: artificial dementia.

The paradox of perfection: the ‘false positive’ trap

My experiment revealed a technical problem that should alarm photographers. Google’s SynthID system labelled my authentic photo as ‘AI-generated’ – with all the unpleasant consequences that entails for me. The reason: to bring out the men in their costumes as precisely as possible, I had edited the photo with selective masking in Lightroom. This precision created statistical patterns that the algorithm categorised as ‘unnatural’. The bitter realisation: the more professionally a photographer works, the more likely automated systems are to brand them a forger. A technical label does not prove the truth – it is often just a blind signal.

Technical background: the provenance cross-check

In my report to Google’s AI Vulnerability Programme (issue 483464959), I proposed a solution: the integration of SynthID detection with C2PA provenance manifests (Content Credentials).

The problem: SynthID interprets the statistical pixel anomalies produced by professional masking as generative AI artefacts – regardless of whether the image carries a verified C2PA certificate with a documented processing history.

The solution: a cross-check between the detection signal and the provenance manifest. If the C2PA manifest documents “Local masking in Lightroom Classic v15.1.1, Canon RAW source”, the system should not misinterpret this manual processing as AI generation.
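To make the idea concrete, here is a minimal sketch in Python of how such a cross-check could work. The data structures, field names and the 0.5 threshold are my own assumptions for the example; they do not reproduce Google’s or Adobe’s actual interfaces.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionResult:
    ai_probability: float        # statistical "AI-generated" score from a detector

@dataclass
class ProvenanceManifest:
    signature_valid: bool        # C2PA cryptographic signature verified
    capture_source: str          # e.g. "Canon RAW source"
    edit_actions: list           # documented manual processing steps
    generative_actions: list     # steps declared as AI generation, if any

def classify(detection: DetectionResult, manifest: Optional[ProvenanceManifest]) -> str:
    """Combine the statistical detection signal with the provenance record."""
    if manifest is None or not manifest.signature_valid:
        # No trustworthy provenance: fall back to the detector alone.
        return "likely AI-generated" if detection.ai_probability > 0.5 else "no AI signal"
    if manifest.generative_actions:
        # The manifest itself declares generative steps.
        return "AI-assisted (declared in provenance)"
    # Verified manifest, camera source, only manual edits documented:
    # a high detector score alone should not override the provenance chain.
    if detection.ai_probability > 0.5:
        return "manual editing flagged by detector - provenance attests human authorship"
    return "human-made (provenance verified)"

# Example corresponding to the Plasselb photo described above.
plasselb = ProvenanceManifest(
    signature_valid=True,
    capture_source="Canon RAW source",
    edit_actions=["Local masking in Lightroom Classic v15.1.1"],
    generative_actions=[],
)
print(classify(DetectionResult(ai_probability=0.87), plasselb))
```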

The rejection: Google responded “Won’t Fix (Intended Behaviour)”. The misclassification of authentic works is not considered a security risk, but accepted system behaviour.

The consequence: two competing truths – Adobe says “human”, Google says “AI”. The user is left caught between the two. Trust in digital proof of origin is eroding.

Detailed report available as PDF

The economic and institutional erosion

For 25 years, search engines and aggregators have been using content from established media. While this content trains AI systems, the authors are left empty-handed. For many travel journalists, this means the end.

Even the institutions that are supposed to preserve knowledge are falling short. Wikipedia documents the death of established media, but online magazines such as Tellerrandstories fall through the cracks despite an ISSN and national library archiving. Who will document who is still doing original research in future if the relevance criteria stem from a bygone media era?

Medium or website? On Wikipedia, online magazines and price comparison portals fall into the same category, and traffic determines relevance [AI-generated (gemini)]

Authenticity as a business model

There are ways out. The Tellerrandstories model is based on financial independence: we publish parts of our research free of charge and receive fees from other media. My photos are licensed through the photo agency Alamy, which is also committed to ancillary copyright and source labelling. Transparency is not a luxury, but essential for credibility.

What remains to be done? A manifesto for transparency

We need new forms of labelling – not as a seal of quality, but as proof of production. Just as proof of origin is common for foodstuffs, information needs a digital “package insert”.
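As an illustration of what such a “package insert” could record for the Plasselb photo, here is a minimal sketch; the field names are invented for this example and do not follow any existing standard.

```python
# Hypothetical "package insert" for the Plasselb photo - an invented record,
# not an existing standard; C2PA Content Credentials would carry similar facts.
package_insert = {
    "work": "Les Barbus de la Gruyere at the Alpine procession",
    "author": "Georg Berg",
    "capture": {
        "place": "Oberschrot, Plaffeien",
        "source": "Canon RAW file",
    },
    "processing": ["Local masking in Lightroom Classic v15.1.1"],
    "generative_ai_used": False,
    "provenance": "C2PA Content Credentials (cryptographically signed)",
}

# Print the record as a human-readable label.
for field, value in package_insert.items():
    print(f"{field}: {value}")
```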

  1. Linking detection and provenance: Systems such as SynthID must be harmonised with cryptographic seals such as C2PA (Content Credentials). Only proof of the processing steps protects the authors.
  2. Responsibility of the platforms: Google, Meta and co. must also actively protect and remunerate the sources they use.
  3. Take a stand: Journalists must disclose their methodology. “I was there. I took this photo. I vouch for it with my name.”

Epilogue: the wall of code – when there is method in the error

After my discovery, I sought dialogue. I reported the problem (issue 483464959) to the Google AI Vulnerability Programme and proposed linking the detection to the C2PA data as a solution.

The response was prompt, automated and sobering: “Won’t Fix (Intended Behaviour)”. The answer came one minute after submission. One minute to review a multi-page technical report with a chain of evidence? Not likely. The response bears all the signs of automated triage: standard wording, references to irrelevant categories (safety bypasses, hallucinated sandbox escapes), no engagement with the actual content.

My report was not rejected because it was wrong. It was rejected because it is too common. “This is one of the most common issues reported”, the system writes. In other words, many photographers and journalists have discovered the same problem. But instead of fixing it, Google systematically filters these reports out.

This is no longer a bug – it’s policy.

The fact that an authentic work is branded as an AI product is not a mistake for the tech giant, but ‘known behaviour’. In the logic of Silicon Valley, incorrect labelling is not a security risk, but accepted noise in the system.

For journalists and photographers, this means a loss of authorship. If the algorithm is wrong, people have no right to object. Technology defines the truth, and anyone who falls through the cracks is out of luck.

The more machines ‘protect’ reality, the more easily we lose the people behind it from the picture. But this is precisely why it is more important today than ever to insist on one’s own authorship. A “won’t fix” must not be the last word on our work.

In the end, the image has disappeared. Only the information about the visual original remains. The creative achievement of the photographer is completely irrelevant [AI-generated (gemini)]
Wait a minute! Photos on Tellerrand-Stories

Our way of working is characterised by first-hand experience, well-researched writing and professional, vivid photography. For every story, the travel impressions and photos are gathered in the same place. The photos thus complement and support what is read and carry it further.


Permalink of the original version in German: https://tellerrandstories.de/journalismus-quelle-verschwindet