We Don't Trust Anything We See Online Anymore. AI Did That.
By Ava Thornton
AI-generated images and deepfakes are destroying trust in visual media. Global trust in online visual media has dropped from 62% to 31% in two years.
Something broke in 2025, and we're only now understanding how bad the damage is. Our collective ability to look at an image or video online and believe it's real has essentially collapsed. And AI is the primary reason.
A new study from the Reuters Institute found that trust in online visual media has dropped to 31% globally, down from 62% just two years ago. The decline tracks almost perfectly with the rise of AI image and video generation tools. People don't trust their eyes anymore, and honestly, they probably shouldn't.
The problem goes beyond deepfakes. AI-generated images are the most discussed threat, but the trust erosion comes from multiple directions at once, and 2026 is shaping up to be the year we collectively realize there's no easy fix.
## How AI Broke Visual Trust Online
The obvious culprit is [AI image generation](/glossary). Tools like Midjourney, DALL-E, Flux, and Stable Diffusion have made it trivially easy to create photorealistic images of events that never happened, places that don't exist, and people doing things they never did. Two years ago, you could usually spot AI images by looking for weird hands or distorted text. That's not reliable anymore.
But the trust problem is bigger than generated images. Consider what's happening across visual media in 2026:
- **AI-generated images are appearing in news articles, sometimes placed there by the publications themselves.** A recent analysis by the Columbia Journalism Review found AI-generated imagery in 14% of online news articles across major outlets, often without disclosure.
- **Video game footage is being shared as real-world events.** Clips from games like GTA 6 and Microsoft Flight Simulator are regularly posted on social media with captions claiming they show real incidents. At social media resolution, the visual quality of modern games makes them effectively indistinguishable from real footage.
- **Real images are being dismissed as AI-generated.** This might be the most insidious effect. When authentic photojournalism from conflict zones or natural disasters is shared online, a significant portion of commenters immediately label it "AI" or "fake." The liar's dividend, in which genuine evidence is discounted simply because convincing fakes exist, is fast becoming the default reaction.
- **Deepfake videos are getting cheaper and better.** A convincing deepfake that would have cost $10,000 to produce in 2023 can now be generated for under $50 using publicly available tools. The quality floor keeps rising.
## Why Existing Solutions Don't Work at Scale
The tech industry's response to the visual trust crisis has been watermarking and metadata. Google's SynthID embeds invisible watermarks in AI-generated images. OpenAI's DALL-E includes C2PA metadata. Adobe's Content Authenticity Initiative adds provenance information to images.
These systems work in theory. In practice, they fail for three reasons:
First, provenance metadata is stripped every time an image is screenshotted, compressed, or re-uploaded to social media, and even pixel-level watermarks like SynthID degrade under heavy compression and cropping. By the time an image goes viral, the provenance information is gone. An MIT study found that less than 2% of AI-generated images circulating on social media retain their original watermarking.
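To see how fragile metadata-based provenance is, here's a minimal sketch using Python's Pillow library (the filenames are hypothetical): a plain re-save, which is effectively what a screenshot-and-repost pipeline does, silently drops EXIF data unless the caller explicitly carries it over.

```python
from PIL import Image

# Open a hypothetical image carrying EXIF metadata
# (camera info, editing history, provenance fields, etc.).
original = Image.open("photo.jpg")
print("Original EXIF tags:", len(original.getexif()))

# Re-save the pixels without passing exif=..., which is what
# most screenshot, compression, and re-upload pipelines amount to.
original.save("reposted.jpg", quality=85)

# The re-saved copy carries no metadata at all.
reposted = Image.open("reposted.jpg")
print("Re-saved EXIF tags:", len(reposted.getexif()))  # typically 0
```

C2PA manifests are a richer structure than EXIF, but they suffer the same fate: any tool that rewrites the file without explicitly preserving them discards them.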
Second, only legitimate tools include watermarks. Open-source image generators, local installations, and tools from countries without AI regulation don't watermark anything. The bad actors who are most likely to misuse AI images are the least likely to use watermarked tools.
Third, detection tools have an accuracy problem. The best AI image detectors achieve about 85-90% accuracy on fresh images. That sounds good until you realize that at the scale of social media, where billions of images are shared daily, even a 10% error rate means hundreds of millions of false results. Users don't trust detectors because detectors aren't trustworthy enough.
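To make the scale problem concrete, here's a back-of-the-envelope calculation. The volume and prevalence figures are illustrative assumptions, not measured data; the point is what a 90%-accurate classifier does at social media volume.

```python
# Illustrative assumptions, not measured figures.
daily_images = 3_000_000_000   # images shared per day
ai_share = 0.10                # fraction that are AI-generated
accuracy = 0.90                # detector accuracy for both classes

ai_images = daily_images * ai_share
real_images = daily_images - ai_images

# Real images wrongly flagged as AI (false positives)
false_positives = real_images * (1 - accuracy)
# AI images wrongly passed as real (false negatives)
false_negatives = ai_images * (1 - accuracy)

print(f"False positives/day: {false_positives:,.0f}")   # 270,000,000
print(f"False negatives/day: {false_negatives:,.0f}")   # 30,000,000

# Precision: of everything flagged "AI", how much actually is AI?
true_positives = ai_images * accuracy
precision = true_positives / (true_positives + false_positives)
print(f"Precision of an 'AI' flag: {precision:.0%}")    # 50%
```

Even under these generous assumptions, half of all "this is AI" flags would be wrong, which is roughly why users learn to ignore them.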
## What Happens When Nobody Trusts What They See
The downstream effects of visual distrust are already showing up in unexpected places. Insurance companies report a 340% increase in contested claims where the authenticity of photographic evidence is challenged. Legal proceedings are increasingly requiring chain-of-custody documentation for any digital visual evidence. News organizations are spending more on verification than on original photography.
For [AI companies](/companies), the trust crisis poses an existential brand risk. Every time an AI-generated image causes harm, it reflects on the entire industry. [OpenAI](/companies/openai), [Google](/companies/google), and [Meta](/companies/meta) all have capable image generation tools, and all of them are wrestling with how to balance creative freedom against misuse potential.
The cultural shift might be the most lasting effect. We're raising a generation that defaults to disbelief when shown visual media. That's not healthy skepticism. It's corrosive cynicism that makes people vulnerable to manipulation in the opposite direction, where genuine evidence is dismissed and conspiracy narratives fill the vacuum.
Some researchers argue we need to move beyond the authenticity of individual images and toward the trustworthiness of sources. If you trust a news organization's editorial standards, you trust their images. If you don't, no amount of watermarking helps. That's not a technological solution. It's a social one.
The uncomfortable truth is that the visual trust genie is out of the bottle, and current technology can't put it back. We're going to have to build new social norms, institutional trust, and verification practices from scratch. The AI industry that created this problem has a responsibility to help solve it, but so far, the solutions on offer are band-aids on a bullet wound.
## Frequently Asked Questions
**How much has trust in online images declined?**
According to a Reuters Institute study, global trust in online visual media dropped from 62% to 31% over two years. The decline correlates closely with the widespread availability of AI image generation tools.
**Can AI-generated images be detected reliably?**
Current detection tools achieve 85-90% accuracy on fresh images, but accuracy drops significantly for compressed, screenshotted, or edited images. At the scale of social media, error rates make existing detectors unreliable for widespread use. Check our [AI tools comparison](/compare) for more on detection capabilities.
**What is the "liar's dividend" in AI imagery?**
It's the phenomenon where the existence of convincing fakes causes people to dismiss real images as AI-generated. When anyone can claim "that's AI," authentic evidence of real events gets discounted. It's one of the most damaging effects of the AI [image generation](/glossary) boom.
**What can I do to verify if an image is real?**
Check the source. Reverse image search it. Look for it on trusted news outlets. Check the metadata if available. But honestly, no single technique is foolproof in 2026. The best approach is trusting established sources rather than trying to authenticate individual images.
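For the metadata check specifically, here's a minimal sketch using Pillow (the filename is hypothetical). A surviving `Software` or camera tag can hint at an image's origin, but remember the caveat from earlier: absence of metadata proves nothing, because most re-upload pipelines strip it anyway.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> None:
    """Print any EXIF tags an image still carries.

    A Software or camera tag can hint at origin, but missing
    metadata proves nothing: most re-uploads strip it anyway.
    """
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata (common for re-uploaded images).")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

inspect_metadata("suspect_image.jpg")  # hypothetical file
```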
## Key Terms Explained

**DALL-E**: OpenAI's text-to-image generation model.

**Deepfake**: AI-generated media that realistically depicts a person saying or doing something they never actually did.

**Midjourney**: A popular AI image generation service known for its distinctive artistic style.

**OpenAI**: The AI company behind ChatGPT, GPT-4, DALL-E, and Whisper.