AI: Boosting Speed and Productivity, but Impairing Critical Thinking


AI pervades our lives, with mounting pressure to embrace it, yet evidence of its positive impact on human intelligence is diminishing. On January 1, 2026, programmer Steve Yegge introduced an open-source platform called Gas Town that allows users to deploy AI coding agents in bulk, creating software faster than humans can. One early user remarked on the overwhelming nature of the experience, citing stress rather than increased productivity. This sentiment should resonate in executive boardrooms and industry events where “intelligence” is overly touted. The accelerating pace of machines is leaving humans weary, stressed, and seemingly less apt at critical thinking—a trait AI was supposed to nurture.

The insistence on AI adoption has birthed a coercive jargon:

You need AI.
You must use AI.
You have to purchase AI.
Competitors use it.
Children will lag without it.

This rhetoric originates not from problem-solving engineers, but from fervent earnings calls and product launches. At the World Economic Forum in January 2026, Microsoft CEO Satya Nadella stressed that AI must show real-world benefits to retain its “social permission” amid massive energy consumption. This highlights a shift: instead of proving technological efficacy, the focus is on convincing the public of AI’s potential. Nadella equated AI to a “cognitive amplifier” with “access to infinite minds.”

However, a Circana survey noted 35% of U.S. consumers don’t want AI in their devices, not due to technophobia, but because they deem it unnecessary. In March 2026, a Goldman Sachs analysis found no significant productivity correlation with AI adoption economy-wide. Though 70% of S&P 500 companies discussed AI in earnings calls, just 10% provided its impact metrics, with only 1% specifying earnings effects. Meanwhile, the top five US tech firms are projected to invest $667 billion in AI infrastructure by 2026, a 62% rise from the prior year. The National Bureau of Economic Research termed this a “productivity paradox”: gains appearing larger in perception than in reality.

Actual productivity gains are narrowly concentrated, with a median 30% improvement in customer support and software development. Outside these areas, widespread enhancement is absent, reducing the promised revolution to specific niches. Yet even where AI performs best, drawbacks emerge. A February 2026 study by UC Berkeley’s Haas School of Business found that AI increased workloads at a 200-person tech firm, escalating task expectations, broadening job scopes, and eventually exhausting workers.

The cycle, labeled “workload creep,” results in unnoticed task accumulation until cognitive fatigue impairs decision quality. Harvard Business Review dubbed this “AI brain fry,” with 14% of AI users reporting diminished concentration, slower decision-making, and post-interaction headaches. Those most affected aren’t skeptics or late adopters; they are enthusiasts doing what industry leaders urge.

Fatigue doesn’t affect everyone equally. Some 62% of associates and 61% of entry-level employees reported AI-related burnout, versus just 38% of C-suite executives. The pattern is predictable: decision-makers aren’t the ones managing AI’s outputs and rectifying its flaws every day.

This raises a critical question: what do we mean by “intelligence”? The term “artificial intelligence,” coined in 1956, was as much a marketing ploy as a scientific concept, equating computation with cognition. AI systems excel at statistical prediction but not at intelligence as humans define it: judgment, reflection, and contemplation. The conflation of these two definitions fuels the commercial AI endeavor.

Paradoxically, the surge in artificial intelligence risks undermining true human intelligence, which requires conditions the AI industry disrupts—sustained focus, ambiguity tolerance, contemplation before problem-solving, and mental space for doubt and reconsideration. A February 2026 London School of Economics paper argued the urgency surrounding AI detracts from democratic dialogue, risking the collective future decisions we should deliberate.

The irony is sharp: while machines operate at extraordinary speeds, human users face mental clutter, impaired focus, and declining cognitive function. A senior engineering manager in the BCG study reported that his role had shifted from problem-solving to tool management, citing “mental clutter” from constantly juggling AI tools.

Not everyone complies; a third of consumers have rejected AI in their personal devices. Organizations prioritizing work-life balance see a 28% reduction in AI fatigue, suggesting the issue stems from the culture of incessant AI adoption rather than the technology itself.

The question is not whether AI is useful (it is, in particular contexts) but whether the frenzy for adoption truly enhances intelligence or merely enforces compliance. Against $67 billion in quarterly investments and pervasive AI references, the January survey’s common retort, “I do not need it,” reflects a potent truth about AI. Whether attention spans are long enough to heed this insight remains to be seen.
