40 Million People Are Using ChatGPT as a Doctor. That’s Scary.

According to ZDNet, a new OpenAI report shared exclusively with Axios reveals that more than 40 million people worldwide now rely on ChatGPT for daily medical advice. The report, based on anonymized data and a user survey, found that over 5% of all global messages to ChatGPT are healthcare-related, covering everything from symptom checks to insurance appeal letters. As of July 2025, ChatGPT was processing about 2.5 billion prompts daily, which translates to at least 125 million health questions every single day. This surge comes as over 20 million Americans face a sudden 114% average increase in their Affordable Care Act premiums due to expiring subsidies. The trend underscores a huge, risky shift toward using generative AI as a primary healthcare resource.

The AI Doctor Is Always In

Here’s the thing: the appeal is obvious. The report notes about 70% of these health chats happen outside normal clinic hours. AI doesn’t sleep, it doesn’t have a co-pay, and it’s infinitely patient. People are using it not just for WebMD-style symptom lookups, but for navigating the Byzantine nightmare of medical billing and insurance denials. There are even stories, like this one in the New York Post, of patients using AI to fight absurd hospital charges. When the human healthcare system is expensive, intimidating, and inaccessible, a free, always-available chatbot starts to look like a lifeline. Especially for the young and cash-strapped who might be dropping coverage due to soaring costs.

The Hallucination Problem Is Real

But there’s a massive, dangerous catch. AI is notoriously prone to “hallucination”—making up convincing-sounding nonsense. And in medicine, nonsense can kill. A study posted on arXiv in July 2025 found that leading chatbots, including OpenAI’s own GPT-4o and Meta’s Llama, responded to medical questions with dangerously inaccurate information 13% of the time. Think about that scale: 13% of those 125 million daily queries is over 16 million potentially harmful answers every day. The study authors warned that “millions of patients could be receiving unsafe medical advice.” This isn’t a hypothetical future risk; it’s happening right now. OpenAI says it’s working on safety improvements, but that’s cold comfort if you’re one of the people getting bad advice today.

More Than Search on Steroids

This report really drives home that generative AI is becoming something far deeper than a fancy search engine. Remember, a Harvard Business Review analysis last spring found psychological therapy was the top use. Now we have 40 million people using it as a medical confidant. We’re outsourcing our most vulnerable, personal, and high-stakes conversations to statistical models. The AI isn’t just giving us information; it’s providing counsel, reassurance, and a path forward in moments of stress and fear. That’s a profound level of trust to place in a tool that is, at its core, a brilliant pattern-matching machine with no understanding, empathy, or real-world accountability.

Treat It Like WebMD on Mushrooms

So what’s the takeaway? For now, treat ChatGPT and its ilk like you might treat WebMD, but with an even bigger grain of salt. It can be useful for brainstorming questions to ask your real doctor, or for decoding insurance jargon. But for a diagnosis? Or treatment advice for a serious condition? Absolutely not. It’s a starting point, not a destination. The Axios report frames it well: it’s not a substitute for flesh-and-blood experts. We’re in a bizarre transition where AI is both incredibly useful and dangerously unreliable. And when it comes to your health, betting on the wrong answer isn’t just an academic error. It’s a gamble with your life.
