AI Doctors: Are ChatGPT & Health Chatbots Safe? (Controversial Health Tech) (2026)

Your doctor may already be using a controversial new tool in healthcare, and you might be next. It seems like just yesterday that asking a chatbot for medical advice felt like a wild guess. A study from only two years ago found that ChatGPT correctly diagnosed just 2 out of 10 pediatric cases. And let's not forget some of AI's early, bizarre recommendations, like Google Gemini suggesting people eat a small rock daily or use glue to make cheese stick to pizza. One man even ended up hospitalized after following ChatGPT's advice to replace table salt with sodium bromide.

But here's where it gets interesting: AI companies are now rolling out health-specific chatbots designed for both us, the consumers, and for healthcare professionals. OpenAI has launched ChatGPT Health, allowing you to connect your personal medical records for (theoretically) more accurate health advice. They've also introduced ChatGPT for Healthcare, which hospitals are already implementing. Not to be outdone, Anthropic has developed Claude for Healthcare, aimed at assisting doctors with tasks like retrieving patient records and improving patient-provider communication.

So, what makes these specialized chatbots different from the general ones? According to Torrey Creed, an associate professor at the University of Pennsylvania researching AI in psychiatry, the key lies in their training data. Health-specific chatbots are trained on healthcare data, meaning they shouldn't be pulling information from unreliable sources like social media. Another crucial difference is how your private data is handled: it isn't supposed to be sold or used for training. Chatbots built for the healthcare sector are designed to be HIPAA compliant, and those that ask consumers about symptoms are meant to help connect the dots toward next steps, with privacy settings in place to protect your information.

Raina Merchant, executive director of the Center for Health Care Transformation and Innovation at UPenn, shares her insights on this evolving AI medical landscape and how doctors are already leveraging this technology. She emphasizes that while AI holds immense potential, it's essential to use it with caution for now.

How is the healthcare system currently embracing these AI chatbots?

"It's a really exciting area," Merchant explains. At Penn, they have a program called Chart Hero, which is like a ChatGPT embedded directly into a patient's health record. "It's an AI agent I can prompt with specific questions to help find information in a chart or make calculations for risk scores or guidance. Since it's all embedded, I don’t have to go look at separate sources." This innovation allows doctors to dedicate more time to patient interaction and human connection, as they spend less time sifting through charts or synthesizing information from various places. "It’s been a real game changer."

There's also significant development in ambient AI, where with patient consent, AI can listen in and help generate clinical notes. Additionally, AI is being integrated into messaging systems. For instance, a patient portal uses AI to help identify ways to accurately answer patient questions, always with a human in the loop.

What does having a human in the loop actually mean?

Many hospital-based chatbots are intentionally supervised by humans. What might appear fully automated often has people working behind the scenes, ensuring there are checks and balances in place.

So, a tool like ChatGPT Health, which is purely consumer-facing, wouldn't have a human in the loop. You could be on your couch, asking AI your health questions. What would you recommend patients use ChatGPT Health for, and what are its limitations?

Merchant views AI chatbots as tools, not clinicians. Their primary aim is to make healthcare more accessible and navigable. "They are good at guidance, but not so much judgment." While they can help you understand next steps, she advises against using them for making medical decisions.

She finds great value in using AI to prepare questions for your doctor. "Going to a medical appointment, people can have certain emotions. Feeling like you’re going in more prepared, that you thought of all the questions, can be good."

For example, if I have a low-grade fever, is it a good idea to ask ChatGPT Health what to do?

"If you are at the point of making a decision, that’s when I would engage a physician," Merchant states. She sees significant value in using the chatbot as a tool for understanding next steps, but not for making the ultimate decision.

How reliable are these new health chatbots when it comes to diagnosing conditions?

These chatbots possess a vast amount of information that can be beneficial for both patients and clinicians. However, a critical unknown is when they might hallucinate (generate false information) or deviate from established guidelines and recommendations. "It won’t be clear when the bot is making something up."

Merchant offers a few key pieces of advice for patients: check for consistency, validate information with trusted sources, and trust your instincts. If something sounds too good to be true, approach any decision based on that information with a healthy dose of hesitancy.

What sources should patients rely on to verify AI-generated information?

Merchant personally trusts well-known sources like information from the American Heart Association or other major medical associations that provide guidelines and recommendations. When it comes to trusting a chatbot, she suggests that's precisely when it becomes valuable to collaborate with your healthcare professional.

Is the data patients input into health chatbots secure?

Merchant's strong recommendation for any patient is to avoid sharing personal details such as your name, address, medical record number, or prescription IDs. "Because it’s not the environment we use for protecting patient information—in the same way that I wouldn’t enter my Social Security number into a random website or Google interface."

Does this advice extend to health chatbots provided by hospitals or health centers?

"If a hospital is providing a chatbot and [is very clear and transparent] about how the information is being used, and health information is protected, then I would feel comfortable entering my information there," Merchant clarifies. However, for any service lacking transparency about data ownership and usage, she would not share personal details.

While AI promises to revolutionize healthcare, the question remains: are we ready to fully embrace it? Do you trust AI with your health information? Let us know your thoughts in the comments below.

Article information

Author: Jamar Nader
