Why AI Is So Terrible at Handling News

And why it's a big opportunity for creators

In partnership with

Stay up-to-date with AI

The Rundown is the most trusted AI newsletter in the world, with 1,000,000+ readers and exclusive interviews with AI leaders like Mark Zuckerberg, Demis Hassabis, Mustafa Suleyman, and more.

Their expert research team spends all day learning what’s new in AI and talking with industry experts, then distills the most important developments into one free email every morning.

Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

Why AI Fails at News

Illustration via Ideogram

News is in my blood. My day job is running a news photography agency, and my main website is a news site for the Bay Area.

I live and breathe news content. So I’m acutely aware of how badly AI is handling it.

A new study from the Columbia Journalism Review shows that AI search engines like Perplexity and chatbots like Gemini consistently fail to correctly report and cite news stories.

One chatbot CJR studied—Grok 3 from Elon Musk’s X—got news stories wrong over 90% of the time.

These bots confidently fill in wrong information, cite syndicated versions of stories on platforms like Yahoo! News instead of the original sources, and even violate publishers’ terms of service by crawling sites where they’ve been explicitly blocked.

The end result? Anyone seeking news from AI bots is in for disappointment, and will likely come away with inaccurate or even harmful information.

Why are bots so bad at handling news? It likely comes down to how they’re trained.

Tools like Perplexity, ChatGPT, and Gemini are built on models trained on billions of pages of text, in some cases spanning hundreds of years. Their main job is to find patterns in all that training data, which lets them answer many questions quite accurately.

The problem with news is that it is, by definition, new. Since chatbots haven’t seen the information in a news article before, it’s very difficult for them to understand and surface that information accurately.

Imagine an old-timey balance scale.

On one side, you have the collective weight of billions of pages of data—sourced from every book ever published, every website on the internet, and even raw data purchased from companies like Reddit.

On the other side, you have a single data point from a news article.

When an LLM prepares its response to a user's query about a news story, which side of the scale do you think wins out?

The answer, of course, is the side with billions of pages of training data. LLMs simply can’t see beyond their training to accurately integrate news articles into their responses.

That exposes a major flaw for companies like OpenAI and Perplexity. But it also presents a big opportunity for traditional search engines like Google.

And for content creators, it opens up even bigger potential. Jared Bauman and I discussed this in great detail in the most recent episode of Niche Pursuits News.

For those of you in the SEO and web publishing space, we also dive deep into early results from yesterday’s Google core update.

I’m super happy with this episode and excited to hear your thoughts on it! Let me know in the comments on the video—I’ll be happy to answer any questions you have or weigh in further.

Thanks for reading my work!