The drug discovery revolution is real but greatly exaggerated, health chatbots pose documented risks, and the hardest diseases remain unsolved.
In late 2025, a team at Novartis working on Huntington's disease used generative AI to design 15 million candidate compounds of a type called molecular glue degraders, molecules that could cross the blood-brain barrier and target a protein linked to the illness.
Of those candidates, about 60 were synthesized, yielding a promising scaffold that is now being optimized. That is an achievement of computational triage, not a cure for Huntington's disease.
The gap between what AI can do in the lab and what it delivers to patients defines the central tension in health technology in 2026. The industry speaks in revolutionary language; the evidence points to progress that is gradual, uncertain, and often disappointing.
Meanwhile, more than 40 million people a day type their symptoms into ChatGPT, even as patient safety organizations warn of the dangers of using the technology this way.
The pitch for AI in drug discovery is tempting and, in narrow terms, accurate. Conventional drug development takes 10 to 15 years and costs around $2.5 billion per successful compound, with about 90 percent failing in clinical trials.
AI can shorten early discovery timelines by 30 to 40 percent and compress preclinical development from three or four years to 13 to 18 months. Insilico Medicine took an AI-discovered drug for idiopathic pulmonary fibrosis from target identification to Phase II trials in under 30 months, a journey that traditionally takes six to eight years.
By January 2024, at least 75 drugs or vaccines from AI-first biotechs had entered clinical trials, per Boston Consulting Group.
These are real accomplishments but fall short of the finish line. By December 2025, no AI-discovered drug had received FDA approval. The pharmaceutical industry’s 90% clinical failure rate hasn’t improved.
Scientific commentary suggests that AI-discovered compounds progress through trials at rates similar to traditionally discovered ones: a faster start, but unchanged odds of success.
Dr. Raminderpal Singh, writing in Drug Target Review in February 2026, argues that the critical question is not whether AI can speed up preclinical timelines (it can), but whether it can improve clinical success rates.
Until Phase III data and regulatory approvals answer that question, he writes, a cautious approach to AI investment is "entirely justified." One unnamed CEO put it bluntly: "AI has really let us all down in the last decade when it comes to drug discovery. We've just seen failure after failure."
No computation has cured Alzheimer’s, pancreatic cancer, or ALS. The issue isn’t processing power but human biology’s complexity. Diseases with poorly understood mechanisms remain opaque despite faster compound screening.
The bottleneck was never the speed of molecular screening but our limited understanding of cellular processes, the poor predictive power of animal models, and the lengthy clinical trials needed to establish safety and efficacy.
AI cannot bypass biology, shorten a five-year trial to five months, or make the immune system act like a model. Novartis stated plainly at the World Economic Forum in January 2026 that human biology is complex, research translation takes time, and rigorous trials remain essential for many diseases. AI is a tool for intelligent complexity navigation, not a magic solution.
That claim is true, but it is a far cry from Sam Altman's vision of one day simply asking ChatGPT to cure cancer.
AI’s drug discovery progress is real but overhyped, and its role as a health assistant is a cautionary tale.
In January 2026, ECRI ranked AI chatbots in healthcare as the year's top health technology hazard. These tools are neither regulated as medical devices nor validated for clinical use, yet patients and healthcare staff increasingly rely on them.
ECRI documented incidents in which chatbots suggested incorrect diagnoses, unnecessary tests, and substandard supplies, and, in one case, invented a body part. More than 40 million people a day seek health information from ChatGPT, and 25 percent of its 800 million users ask healthcare questions weekly.
A February 2026 University of Oxford study with 1,298 participants produced sobering results. While the LLMs alone identified conditions in 94.9 percent of test cases, participants using them identified the relevant conditions in just 34.5 percent of cases and chose the correct course of action in 44.2 percent, comparable to a control group using traditional resources.
Study lead Dr. Rebecca Payne stated, "Despite all the hype, AI just isn't ready to take on the role of the physician." Medicine is not knowledge retrieval but conversation: doctors probe, clarify, and guide, drawing out relevant information from patients who often don't know it is relevant. Chatbots, which merely respond to what users type, do none of this, and the communication gap shows in the outcomes.
In mental health, the situation is arguably worse. The American Psychological Association issued an advisory noting that AI chatbots were not created to deliver mental health care, yet they are being used for exactly that. Stanford research found that therapy chatbots expressed stigma toward conditions such as alcohol dependence and schizophrenia, with no evidence that more data improves the problem.
AI isn’t
