“Google’s Innovative ‘Reasoning AI’ Could Aid in Fighting Hawk Tuah Spam on Google Maps”

# Google’s Aspirations in AI: Can It Truly Reason Like Humans?

A recent article by [Bloomberg](https://finance.yahoo.com/news/google-working-reasoning-ai-chasing-110027962.html) revealed that Google is working to develop artificial intelligence (AI) software capable of human-like reasoning. The effort is part of Google’s ongoing rivalry with OpenAI, the AI leader behind the popular GPT models. The news, however, has drawn a mix of enthusiasm and skepticism, particularly given Google’s recent struggles with basic functionality in several of its core products, including Google Maps.

## The Hawk Tuah Spam Dilemma on Google Maps

A striking example of Google’s struggles with its existing products is the persistent spam problem on Google Maps, in which schools and other places are being renamed to “Hawk Tuah High School” or variants thereof. The issue has been reported across multiple regions, including the U.S. and Europe, leaving many users confused. Despite its apparent simplicity, Google has been slow to address it, raising doubts about the company’s ability to maintain its current services, let alone build AI systems that replicate human reasoning.

As [Lily Ray](https://twitter.com/lilyraynyc/status/1839934376890576967) noted on Twitter, “Schools are being mysteriously renamed to ‘Hawk Tuah High School.’ What makes this spam disturbing is that many of these changes are being accepted by Google Maps, raising concerns about how effectively Google reviews map edits.” This predicament has persisted for months, with minimal significant media attention, despite its global impact on users.

### A Wider Issue with Google’s Key Products

The Hawk Tuah spam dilemma exemplifies a broader trend of declining quality within Google’s essential products. Google Search, once lauded as the pinnacle of online information retrieval, has faced criticism for becoming cluttered with ads and subpar content. Ed Zitron’s article, [*The Man Who Killed Google Search*](https://www.wheresyoured.at/the-men-who-killed-google/), explores how Google’s search engine has regressed over time, with users increasingly exasperated by the dominance of SEO-optimized yet low-value content.

Google’s AI systems have also drawn scrutiny for producing bizarre and occasionally inappropriate output. Google’s Gemini image generator, for instance, faced backlash for producing historically inaccurate content, such as ethnically diverse portrayals of Nazi soldiers. These failures, along with “AI hallucinations” in which a model confidently outputs incorrect or absurd information, have raised concerns about the reliability of Google’s AI technologies.

## Can Google Truly Achieve Human-Like Reasoning in AI?

Amid these ongoing challenges, skepticism abounds regarding Google’s capacity to engineer AI that can reason like a human. Despite tremendous advancements in AI in recent years, the intricacy of human reasoning remains significantly beyond the reach of present models. Human reasoning encompasses not just logical thought but also emotional intelligence, cultural knowledge, and the ability to navigate ambiguous circumstances—domains where AI continues to struggle.

Furthermore, Google’s track record with its existing AI systems raises additional doubts. In one notorious hallucination incident, Google’s AI Overviews suggested adding glue to pizza to keep the cheese from sliding off, a recommendation that was widely mocked online. If Google’s AI cannot reliably produce simple, sensible recommendations, how can it be expected to handle the complexities of human reasoning?

### The Competition with OpenAI

Google’s effort to cultivate human-like reasoning in AI is part of its broader competition with OpenAI, which has garnered substantial attention with its GPT models. For example, OpenAI’s GPT-4 can generate coherent and contextually relevant text, yet even it fails to achieve true human reasoning. Both companies are racing to create more sophisticated AI systems, but the hurdles are significant.

OpenAI’s CEO, Sam Altman, has faced both accolades and criticism for his AI development strategies. Some have voiced concerns about the ethical ramifications of AI, while others have questioned whether the excitement surrounding AI is warranted. Regardless, the rivalry between Google and OpenAI is poised to influence the landscape of AI development for the foreseeable future.

## Conclusion: A Challenging Path Ahead for Google

While Google’s goal to engineer AI that can reason like a human is undoubtedly thrilling, the company’s recent failures with basic functionalities in its core products pose crucial questions about its capacity to realize this ambition. The Hawk Tuah spam dilemma on Google Maps, the quality decline in Google Search, and the absurd outputs from its AI systems all indicate a company grappling with maintaining the standards of its existing services.

As Google continues to allocate resources to AI research, it would do well to shore up the basics first. Restoring trust in products like Maps and Search may prove to be a prerequisite for convincing users that human-like reasoning AI is within its reach.