Read This First Before Using ChatGPT Health

OpenAI’s health chatbot is setting the stage for potential AI legal challenges.

OpenAI’s chatbot, ChatGPT, has become a prominent source of health advice. According to OpenAI, 40 million people use ChatGPT daily to navigate medical bills, appeal insurance claims, or manage treatments. A February Gallup poll found that nearly 16% of U.S. adults already seek medical information from AI or social media.

With $220 billion in medical debt in the U.S. and ongoing healthcare workforce shortages, AI could help fill the gap. OpenAI introduced ChatGPT Health in January, a free service that offers health advice based on users’ personal medical records, an approach that raises privacy concerns. Amazon and Microsoft have followed with similar AI health tools, which aren’t always HIPAA compliant.

Some experts warn of privacy risks. Melodi Dinçer from the Tech Justice Law Project criticizes OpenAI’s strategy as data exploitation. Andrew Crawford from the Center for Democracy and Technology points out the lack of comprehensive federal privacy laws in the U.S.

AI in healthcare could transform tasks like medical imaging and insurance processing, but consumer-facing products like ChatGPT Health may operate without the necessary regulation. HIPAA’s protections apply only to covered entities such as healthcare providers and insurers, so personal data shared outside those channels goes unprotected.

Most health apps, like Oura and Apple Health, aren’t covered by HIPAA, leaving companies free to use the data they collect largely as they see fit. Consumers must navigate a confusing landscape of privacy policies and opt-out systems.

The FDA has chosen to limit regulation of wearable tech, categorizing these devices as low-risk so long as they make no diagnostic or treatment claims.

As AI grows in the healthcare space, experts caution against risks like bias and misinformation. New health tech, even if promising, has significant legal and privacy implications yet to be fully addressed.