# Study Reveals AI Search Engines Mislead Users and Disregard Publisher Requests
A new investigation from the **Columbia Journalism Review’s (CJR) Tow Center for Digital Journalism** raises significant concerns about the accuracy and ethics of AI-driven search engines. The study, which evaluated eight AI-based search tools, found that these systems often **mislead users** and frequently **ignore publisher exclusion requests**, creating serious problems for news distribution and attribution.
## Misinformation by AI Search Engines
The research showed that AI search engines struggle to answer news-related queries accurately. Researchers **Klaudia Jaźwińska and Aisvarya Chandrasekar** found that, collectively, the tools **provided incorrect responses to over 60% of news queries**.
Some of the most alarming revelations are as follows:
- **Perplexity AI** delivered wrong information in **37% of the examined queries**.
- **ChatGPT Search** incorrectly identified **67% of articles** (134 out of 200).
- **Grok 3**, a model from xAI, recorded the highest error rate, at **94%**.
These AI tools often **fabricated plausible but erroneous answers**, a behavior known as **confabulation**. Rather than declining to answer when unsure, the models confidently presented **false or misleading information**, making it harder for users to separate fact from fiction.
### Premium AI Search Tools Underperform
Interestingly, the study revealed that **paid versions** of AI search tools sometimes had poorer performance than their free versions.
- **Perplexity Pro ($20/month)** and **Grok 3’s premium offering ($40/month)** yielded **more incorrect answers** than their free counterparts.
- Although the premium models answered more queries correctly, they **declined to answer less often and were more confident in their wrong answers**, producing a higher overall rate of misinformation.
## Publisher Exclusion Requests Overlooked by AI Search Engines
Another significant finding from the study was that certain AI search engines **neglected publisher requests** to prevent their content from being scraped or utilized.
- **Perplexity’s free version** successfully accessed **paywalled content** from National Geographic, even though the publisher explicitly **blocked Perplexity’s web crawlers**.
- AI search engines often **redirected users to syndicated versions** of articles (e.g., Yahoo News) rather than the **original publisher’s site**.
- **Google’s Gemini and Grok 3** frequently **generated false URLs**, directing users to broken links or nonexistent pages.
This results in a **lose-lose scenario for publishers**:
- If they **block AI crawlers**, they risk **losing attribution entirely**.
- If they **permit AI crawlers**, their content may be **reused without proper credit or referral traffic**.
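The fabricated-URL finding is also the easiest one to probe mechanically. As a rough illustration (the URLs below are hypothetical, not from the study), a structural check like this catches only malformed citations; confirming that a page actually exists would still require fetching it:

```python
# Sketch: a first-pass structural check a researcher might run over
# AI-cited URLs before testing them against the live web.
from urllib.parse import urlparse

def looks_like_valid_url(url: str) -> bool:
    """Return True if the string has an http(s) scheme and a hostname."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

# Hypothetical citations of the kinds the study describes.
citations = [
    "https://example.com/news/story-123",  # well-formed
    "example.com/story",                   # missing scheme
    "https:///broken-path",                # missing host
]
for url in citations:
    print(url, looks_like_valid_url(url))
```

A check like this would flag the second and third entries, but a syntactically valid URL that points to a nonexistent page — the failure mode attributed to Gemini and Grok 3 — would still pass and needs a live request to detect.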
## Reactions from the Industry and Future Prospects
Mark Howard, **Chief Operating Officer at Time Magazine**, voiced concerns about **transparency and control** regarding how AI search engines utilize publisher content. Nonetheless, he remains hopeful, asserting:
> “Today is the worst that the product will ever be.”
Howard believes that **continued investment and technical progress** will eventually produce better AI search tools. However, he also placed some responsibility on users:
> “If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them.”
### Responses from AI Companies
- **OpenAI** acknowledged the study’s findings and reaffirmed its commitment to **supporting publishers** by providing **summaries, quotes, and attribution**.
- **Microsoft** stated that it **follows the Robots Exclusion Protocol** and respects publishers’ guidelines.
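The protocol Microsoft refers to is the familiar `robots.txt` mechanism. As a minimal sketch (the bot name and rules here are invented, not any real publisher’s file), a compliant crawler parses those rules and checks them before fetching a page; skipping this check is precisely the behavior the study criticizes:

```python
# Sketch: how a well-behaved crawler honors the Robots Exclusion
# Protocol, using Python's standard-library parser. The user-agent
# and rules are hypothetical.
from urllib.robotparser import RobotFileParser

# Rules a publisher might serve at /robots.txt to block one bot
# while allowing all others.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler consults the rules before every fetch.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

Note that `robots.txt` is purely advisory: nothing technically prevents a crawler from fetching a disallowed page, which is why the study’s finding that some tools retrieved explicitly blocked content is a question of policy compliance rather than a technical bypass.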
## Conclusion
The CJR study underscores **significant issues in AI-powered search engines**, including **high rates of misinformation, citation problems, and ethical dilemmas regarding publisher control**. As AI search tools gain popularity, tackling these challenges will be essential for ensuring **accurate and accountable news distribution**.
For a more in-depth analysis of the study, visit the **[Columbia Journalism Review’s website](https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php)**.