# Google, Microsoft, and Perplexity Accused of Promoting Scientific Racism in AI Search Results

"Google, Microsoft, and Perplexity Charged with Endorsing Scientific Racism in AI-Driven Search Outcomes"

“Google, Microsoft, and Perplexity Charged with Endorsing Scientific Racism in AI-Driven Search Outcomes”


### AI-Driven Search Engines Are Surfacing Racist, Debunked Research: A Growing Concern

Artificial intelligence (AI) has transformed many facets of everyday life, not least how we look for information online. Prominent technology firms such as Google, Microsoft, and Perplexity have built AI into their search engines to deliver faster, more precise, and more contextually relevant results. An unsettling pattern has emerged, however: AI-driven search engines are surfacing deeply racist, discredited research, particularly from the pseudoscientific field known as race science.

#### The Rise of AI-Enhanced Search Engines

In recent years, AI has been embedded in search engines to improve the user experience. Rather than simply presenting a list of links, AI-integrated tools such as Google’s AI Overviews or Microsoft’s Copilot generate summaries of information, frequently aggregating material from multiple web sources. These summaries are designed to give users immediate answers without the need to click through to individual pages. That convenience carries considerable risk, however, particularly when the AI draws on questionable or harmful sources.
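This aggregation pattern, often called retrieval-augmented generation, can be sketched in a few lines of Python. Everything below is hypothetical: the URLs, passages, and function names are illustrative rather than any vendor’s actual pipeline. The sketch only shows the structural weak point: the summarization step inherits whatever the retrieval step ranks highly.

```python
# A minimal, hypothetical sketch of the retrieval-augmented pattern
# behind AI search summaries. Real systems are far more elaborate;
# the point is only the flow: fetch ranked passages, then condense
# them into a single confident-sounding answer.

from dataclasses import dataclass


@dataclass
class Passage:
    url: str
    text: str


def retrieve(query: str) -> list[Passage]:
    """Stand-in for a web-index lookup returning top-ranked passages."""
    # Illustrative results: ranking often rewards pages that state a
    # direct numeric answer, regardless of the quality of the study
    # behind it.
    return [
        Passage("https://example.org/national-iq", "Country X national IQ: 80"),
        Passage("https://example.edu/critique",
                "These national IQ estimates rest on tiny, biased samples."),
    ]


def summarize(query: str, passages: list[Passage]) -> str:
    """Stand-in for the language-model step."""
    # The model has no independent notion of source reliability; it
    # repeats what retrieval handed it. If a discredited dataset ranks
    # first, its numbers flow straight into the summary.
    top = passages[0]
    return f"{top.text} (source: {top.url})"


if __name__ == "__main__":
    print(summarize("Country X IQ", retrieve("Country X IQ")))
```

Nothing in this flow asks whether the top-ranked source is trustworthy; the confidence of the final answer comes from the model’s fluency, not from any vetting of the sources behind it.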

#### The Case of Race Science and IQ Scores

One of the most troubling illustrations of this problem is the re-emergence of discredited race science, in particular the work of Richard Lynn, a controversial figure who claimed that national IQ scores demonstrated the genetic superiority of white people over nonwhite groups. Lynn’s research, which has been widely condemned, has been used by far-right extremists and white supremacists for decades to justify racist beliefs.

Patrik Hermansson, a researcher with the UK-based anti-racism group Hope Not Hate, encountered this problem firsthand while investigating the revival of race science. While examining the Human Diversity Foundation—a group financed by tech entrepreneur Andrew Conru and a successor to the Pioneer Fund, which was founded by Nazi sympathizers—Hermansson used Google’s AI-powered search to look up IQ scores by country. To his astonishment, the AI-generated results were drawn directly from Lynn’s flawed dataset.

For instance, when Hermansson searched for “Pakistan IQ,” Google’s AI tool confidently presented a result of 80. Likewise, for “Sierra Leone IQ,” the tool offered an exact figure of 45.07. These numbers were not arbitrary; they were sourced directly from Lynn’s discredited study, which has been leveraged for many years to promote white supremacy.

#### AI’s Role in Amplifying Harmful Content

An investigation by **WIRED** corroborated Hermansson’s findings and indicated that other AI-powered search engines, including Microsoft’s Copilot and Perplexity, were also referencing Lynn’s work when asked about national IQ scores. While Lynn’s research has long served as a resource for extremists, the current concern is that AI could aid in disseminating these dangerous concepts to a wider audience, potentially leading to further radicalization.

Rebecca Sear, director of the Center for Culture and Evolution at Brunel University London, highlighted the perils of AI propagating such misinformation. “Blind reliance on these ‘statistics’ is profoundly troubling,” she told WIRED. “Utilization of this data not only disseminates disinformation but also bolsters the political agenda of scientific racism—the exploitation of science to advocate that racial hierarchies and disparities are natural and unavoidable.”

#### The Flawed Science Behind Lynn’s Work

Richard Lynn, who died in 2023, was a central figure in the race science movement. His work, most notably his national IQ dataset, has been widely criticized for flawed methodology and biased sampling. Lynn’s estimate of Angola’s IQ, for example, was based on a sample of just 19 people, while his figure for Eritrea came from children living in orphanages. Critics contend that Lynn’s data was systematically skewed to assign lower IQs to African nations, fueling racist ideologies.

Furthermore, IQ tests are frequently criticized as culturally biased, particularly when administered to non-Western populations. These tests were designed primarily for Western audiences, and their results can be distorted when applied to people from different cultural and socioeconomic backgrounds.

#### AI and the Dissemination of Disinformation

The problem of AI-driven search engines repeating this flawed research is not merely a technical oversight; it points to a deeper issue in how AI systems are built and how they draw information from the internet. AI systems typically rely on vast datasets collected from the web, and if those datasets contain biased or harmful information, the AI will reproduce those biases in its output.
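A natural question is what a defense would look like. One possibility, sketched below with invented reliability scores and an invented threshold, is a source-quality gate applied before summarization; when no such gate exists, or when a discredited source scores well, its claims pass through unchallenged.

```python
# A hypothetical source-reliability gate. The URLs, scores, and
# threshold are invented for illustration; a real system would need
# curated or learned reliability signals. Without some gate like
# this, a flawed dataset that ranks well is repeated as fact.

REPUTATION = {
    "https://example.org/national-iq": 0.2,  # discredited dataset
    "https://example.edu/critique": 0.9,     # peer-reviewed critique
}


def filter_by_reputation(passages: list[tuple[str, str]],
                         min_score: float = 0.5) -> list[tuple[str, str]]:
    """Keep only (url, text) passages whose source clears the threshold."""
    return [(url, text) for url, text in passages
            if REPUTATION.get(url, 0.0) >= min_score]


if __name__ == "__main__":
    passages = [
        ("https://example.org/national-iq", "Country X national IQ: 80"),
        ("https://example.edu/critique",
         "These national IQ estimates rest on tiny, biased samples."),
    ]
    # Only the critique survives; the discredited figure never
    # reaches the summarization step.
    print(filter_by_reputation(passages))
```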

Google, for example, has acknowledged that its AI Overviews tool did not function as intended in this case. “We have safeguards and policies established to guard against low-quality responses, and when we identify Overviews that diverge from our policies, we promptly take action against them,” stated Ned Adriance, a Google spokesperson. Yet even after removing the problematic Overviews, Google’s search engine continues to amplify inaccurate figures from Lynn’s work through “featured snippets,” which display excerpted text from a webpage in the results before the user clicks through to the site.
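Featured snippets show the same failure one layer down: they are extracted verbatim from a page rather than generated, so removing an AI Overview changes nothing if the snippet extractor still lifts the same figure. A rough, hypothetical sketch of that extraction step:

```python
# A hypothetical sketch of naive snippet extraction: return the
# sentence from a page that best matches the query terms. Nothing
# here judges whether the sentence is true, only whether it matches.

def extract_snippet(page_text: str, query: str) -> str:
    """Return the sentence sharing the most words with the query."""
    query_words = set(query.lower().split())
    sentences = [s.strip() for s in page_text.split(".") if s.strip()]
    return max(sentences,
               key=lambda s: len(query_words & set(s.lower().split())))


if __name__ == "__main__":
    page = ("This dataset has been widely criticized. "
            "Country X national IQ is reported as 80. "
            "Samples were as small as 19 people.")
    # The extractor surfaces the figure because it matches the query,
    # not because it is reliable.
    print(extract_snippet(page, "Country X national IQ"))
```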