Could Your Health AI Chatbot Testify Against You in Court?


Tech companies such as OpenAI have multiple reasons for wanting interactions with health-focused AI kept confidential. Last July, OpenAI CEO Sam Altman argued that it is a flaw that conversations with AI lack the legal protections granted to conversations with human experts, posting that he hoped society would soon treat AI conversations like those with lawyers or doctors. Altman continues to advocate for stronger privacy protections for these interactions, even as some states move against AI bots marketed as therapists or legal professionals.

Legal experts note that user privacy isn't the only motivation for seeking stronger safeguards; there is also a self-interest angle. Making LLM conversations inaccessible to courts would protect AI companies along with their users. Altman's comments may relate to OpenAI's own legal troubles: courts have ordered the company to preserve, and potentially disclose, user chat logs as legal evidence. If AI interactions were shielded the way conversations with therapists or lawyers are, such disclosures could be blocked. One path to that outcome is promoting a cultural shift that treats AI guidance as equivalent to advice from human professionals.

Melodi Dinçer of the Tech Justice Law Project explains that an "AI privilege" could resemble existing legal privileges that keep communications confidential. Those privileges, however, generally apply to interactions between humans, and how doctrines like attorney-client privilege map onto AI remains unsettled. Recently, a court ruled that AI-generated documents were not privileged, basing the decision in part on the AI company's own privacy policies.

AI developers are also keen to keep internal data out of legal discovery. While user privacy is crucial, establishing an AI privilege in law raises a hard question: how to secure user data without shielding AI makers from accountability.

As AI in health becomes more prominent, with OpenAI launching ChatGPT Health and others following the trend, revenue and a desire to foster trust in AI guide these companies. Despite hopes for AI privacy safeguards, the legal landscape remains uncertain, and it's clear why leaders like Altman aim to protect chatbot conversations from legal scrutiny.

**Health AI: A Growing Sector**

OpenAI and its rivals are diving into healthcare, launching chatbots for health management; some are HIPAA compliant, others are not. The financial potential of healthcare AI attracts many companies: in 2025, $1.4 billion went to healthcare-related AI, mostly to startups.

**AI and Legal Privileges**

If AI conversations become privileged, new legal complexities could emerge, and the current lack of regulation worries privacy advocates. Should AI "doctors" and other AI professionals receive the same legal confidentiality as their human counterparts? Navigating this evolving landscape is crucial as AI becomes further entangled with user data and healthcare.

Legal experts and companies must balance protecting user data with ensuring accountability, all while advancing AI technology within the regulatory framework.
