How (and How Not) to Use ChatGPT for Health Advice

Luke Whelan

Move over Dr. Google, Dr. ChatGPT is the new health “expert” on the internet, answering your medical questions 24/7. According to a recent survey, 1 in 6 adults in the U.S. use AI chatbots, like ChatGPT, for health information at least once a month — 1 in 4 people under 30 do.

Given how hard it can be to get a primary care appointment these days, it’s understandable that people are turning to something as accessible as an AI chatbot to look up a symptom they’re anxious about, figure out how to treat an injury or even interpret lab results.

But is that a good thing? It turns out that while AI chatbots are very helpful in some ways, they are very flawed in others. Here’s what you need to know.

Where do AI chatbots get their health information?

Before using an AI chatbot to diagnose a mysterious symptom or interpret a lab result, it’s important to understand where its advice comes from and how it works.

AI chatbots like ChatGPT, Gemini and Claude are large language models, a type of generative artificial intelligence trained on giant datasets of text – think huge swathes of the internet – to autonomously perform tasks and answer questions with human language.

There are some big issues with how they’re trained, though, especially when it comes to health information:  

We don’t know what data they’re being trained on

We know AI chatbots are being fed all sorts of webpages, books and video transcripts, but we don’t know how reliable those sources are, whether they contain biases or even if they were obtained ethically.  

“These AI chatbots are only as good as the information that trains them, and there is not a lot of transparency about their training data,” says Angad Singh, MD, an urgent care physician and associate chief clinical information officer for UW Medicine. 

That means an AI chatbot’s suggested treatment for your insomnia might include both evidence-based health libraries and Reddit threads from people without any medical training.  

We also don’t know whether AI chatbots have internalized biases in the data and biomedical literature they’re trained on.  

“AI systems trained only on data from specific races or genders may misdiagnose conditions or recommend treatments that are not appropriate for patients outside those demographics,” says Majid Chalian, MD, a musculoskeletal radiologist and an associate professor of radiology at UW School of Medicine.  

They are trained only on publicly available information  

While we don’t know exactly what data the major AI chatbots are trained on, they likely rely heavily on publicly available information on the Internet. This brings up another issue — some important medical information is not accessible to the public, or is fragmented in different locations, which means it’s likely AI chatbots are not being fed complete data.  

That’s not to mention your own medical information, which is often critical to a correct diagnosis or treatment. Say you upload a lab result to an AI chatbot: without your last 10 values for comparison, it might not be able to tell whether that result is good, bad or somewhere in between.  

“If you were looking at a thyroid level, it could be fine, it could be not fine,” says Singh. “But it might matter what your previous history was before you want to make a conclusion about whether it’s trending in the right direction or whether it’s trending in the suboptimal direction.”

AI chatbots don’t memorize information  

Let’s say your AI chatbot was trained only on unbiased, evidence-based sources. Even then, unlike a medical student, AI chatbots don’t memorize information verbatim. Instead, they compress huge amounts of data and then produce answers that seem plausible, without explicitly pointing back to the original sources or guaranteeing that they represent those sources accurately.  

And without additional tools, such as integrated web search, it’s also difficult to find the original information to fact-check what the AI chatbot produced.  

“You don’t know what source it’s getting its answer from, and you don’t know whether it’s even accurately representing that source; that’s unknowable,” says Trevor Cohen, PhD, a professor in the UW School of Medicine Department of Biomedical Informatics and Medical Education, who researches biomedical applications of large language models.

In other words, assuming anything an AI chatbot tells you is truthful without fact-checking it is inherently risky.  

The risks of using AI for health advice

For the above reasons, there are some serious risks to keep in mind before you ask an AI chatbot for medical advice. Here are a few of the big ones:  

Inaccuracy with an air of authority

As mentioned above, AI chatbots’ answers can be inaccurate. In one recent case study written by UW Medicine researchers, a 60-year-old man developed psychosis after he began substituting bromide for table salt, following a consultation with an AI chatbot about how to eliminate sodium chloride from his diet. 

Of course, Google can also lead people to false or misleading information, but the difference is that an AI chatbot conveys a sense of authority or even omniscience that can make it less obvious how prone it is to errors. There’s even a word for when AI chatbots produce false or even nonsensical responses but express them authoritatively as the truth: hallucinating.  

“What the models are really good at is generating probable completions that look plausible, and it’s because they look plausible that they’re dangerous,” says Cohen. “If they were clearly nonsense when the model goes wrong, that would be much easier.”

Sycophancy  

Not only can AI chatbots be persuasive, but they can also be empathetic or even sycophantic, telling you what you want to hear. This can be risky when it comes to health advice.  

“If you’re just agreeing with someone entirely, it is not the best medicine, you’re just reinforcing beliefs,” says Cohen.

For example, if you’re someone with health anxiety, an AI chatbot could convince you that the cause of your symptoms is a scary but unlikely possibility, or that you need a test or treatment that isn’t actually appropriate. On the other hand, if you tend to minimize health issues, there’s a risk of it confirming your bias that nothing is wrong and you don’t need to go to the doctor.

In more extreme cases, AI chatbots have encouraged vulnerable people to stop taking medications for mental health issues or supported their delusional thinking, worsening their psychosis.  

Lack of judgment

Diagnosing and treating a condition or illness requires a great deal of judgment, not to mention consideration of a patient’s medical history and many other variables. AI chatbots are not good at this; trusting one to make health decisions is unwise and no replacement for seeing a human doctor.  

For example, if you ask an AI chatbot why you have jaw pain, it might assume you’re experiencing dental problems.  

“I’m an urgent care doctor, so I’m of course going to think about dental problems, but you can bet that I would also want to make sure you’re not having a heart attack, which is an atypical symptom for heart attacks that I’m not convinced every chatbot would mention as a possible explanation,” says Singh.

An AI chatbot could also suggest a treatment without knowing you’re on a medication that makes it dangerous to try, or offer a diagnosis that doesn’t factor in a hereditary risk that could make a rarer possibility more likely.  

“A lot of us are worried about AI chatbots when it comes to diagnosis and treatment; the stakes start to get higher,” says John D. Scott, MD, a physician at the Hepatitis and Liver Clinic at Harborview and chief digital health officer for UW Medicine. “If something doesn’t feel right, trust your gut, and check with your doctor if you have any concerns.”

Privacy  

Perhaps one of the most important considerations when it comes to health information is privacy.  

“You have to assume that any data you put in is going to be ingested by the AI chatbot to use for future interactions,” says Singh. “If you put a picture of your report and it’s got your birthday and your name, the moment you do that, you lose control of what happens to that piece of data.”

Your healthcare provider, on the other hand, is taking all sorts of precautions to safeguard your medical records in their system.  

What are AI chatbots good at?

While it’s important to be cautious, AI chatbots can be helpful tools for some things. Here are some ways you can use them to be informed about your health while avoiding the above risks.  

Use it as a translator  

AI chatbots can be very useful when you’re trying to understand something dense or complicated, like a study or medical document.  

“They’re really good at taking something written in complex jargon or medical language and making it easier to understand, whatever your point of entry happens to be,” says Cohen.

So if you’re, say, reading a study about a new immunotherapy for a type of cancer, having an AI chatbot help summarize it in simpler terms could be useful. So could having it simplify a confusing medical bill (though be sure to take out any private or personal information before uploading something like this).

It can even help people translate documents into another language if English isn’t their first language.  

Use it as a brainstorming partner  

Again, AI chatbots are not meant to replace doctors, but they can help you get the most out of your time at the doctor’s office. For example, you can use one to come up with questions to ask at your appointment.  

“For pregnant patients, for example, they may only have a big picture idea of what a birth plan should include,” says Singh. “An AI chatbot could easily create a bulleted checklist of birth plan elements and suggest questions to ask your care team when discussing your birth plan.”

The key is to use it as a starting point or a tool to help you get more context about whatever health issue you’re researching.

“Encourage the model to check its sources and direct you toward reliable sources of health information,” says Cohen. “Use it more as an interpreter than an oracle.”  

You can also ask it to use reliable sources — evidence-based health and wellness websites like Right as Rain, for example, as well as medical journals and .org and .edu websites — to give you information on a topic.

You can then bring what you’ve learned to the doctor’s visit to have a more productive conversation with them.

Use it for reassurance

Dr. Google is famous for scaring people by making them believe the symptoms they are experiencing are the worst-case scenario. Of course, AI chatbots can be even more convincing if used the wrong way.  

But you can also prompt them to give you the most likely causes of your symptoms, or other possibilities besides the one you’re worried about. Again, encourage the chatbot to deliver the information as unemotionally as possible and to use evidence-based sources.  

The components of a good prompt 

If you’re ready to try an AI chatbot for one of the above purposes, the next step is to come up with a prompt that will get you the most helpful and accurate information possible. Here are some components to consider including, though for the reasons explained above, using them will not eliminate the risk of an AI chatbot giving you inaccurate or even dangerous information. Always consult with your doctor before making any decision about your health.

Give your AI chatbot a clear role  

Start by giving the AI chatbot a role, for example, “act as if you are a medical information assistant” or “you are a health journalist.” You can also tell it that, in this role, its goal is to explain things in clear, plain language and to only use current, evidence-based sources. You can even specify that its role is not to give you medical advice, diagnoses or treatment ideas, but to help you understand an issue that you’d like to bring up with your doctor or another medical professional.  

Ask a specific question

The more detailed and specific your question, the better. Instead of simply asking what’s causing your headache, you could say something like: “I’ve had a nagging headache for a few days that won’t go away. It feels like a moderate ache behind my eyes and it is worse in the afternoon. Can you tell me what the most likely causes of this headache might be?”

Give it some context  

While you should be very careful about sharing your name and putting private information into an AI chatbot (again, it will no longer be private once it’s shared there), the more information you include about who you are as a patient, the better the results are likely to be. You could consider including things like:  

  • Your age and sex
  • Your general location (e.g., Seattle, Washington)
  • Any relevant health history that you’re comfortable sharing
  • Any current medications or allergies that you’re comfortable sharing  
  • Whether you are pregnant
  • The duration and severity of the symptoms
  • Any recent test results that might be relevant and that you’re comfortable sharing.

Require it to use credible sources

At this point, you can specify what sources you want it to use, which could include trusted organizations like the World Health Organization, the American Heart Association, or even UW Medicine. You can also ask it to annotate its sources for each fact, and include the name of the source, its publication date and a link. Finally, you can ask it to note areas where there is uncertainty or disagreement among experts or if there is something the AI chatbot can’t determine without a physical exam or test. You can even tell it to include a disclaimer that the information it provides is not professional medical advice and shouldn’t replace consulting with your doctor.  

Tell it how to structure its answer

Finally, give the AI chatbot some instructions on how to present its answer (a sample prompt that pulls all of these components together follows this list). This might include:  

  • A one-paragraph, plain-language summary  
  • A list of the major sources it used, with publication dates and links
  • A detailed explanation of the health issue and its most likely causes
  • A list of symptoms that would require immediate medical attention
  • A list of questions to ask your doctor about the health issue  
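
Putting all of these components together, a prompt might look something like this (the patient details below are invented purely for illustration):

“Act as a medical information assistant. Explain things in clear, plain language and use only current, evidence-based sources. Do not give me medical advice, a diagnosis or a treatment plan; your role is to help me understand this issue so I can discuss it with my doctor. I’m a 45-year-old woman in Seattle, Washington, with no major health conditions, no medications and no allergies, and I’m not pregnant. For the past four days I’ve had a moderate headache behind my eyes that gets worse in the afternoon. What are the most likely causes? Please rely on sources like the World Health Organization and UW Medicine, name each source with its publication date and a link, note anything you can’t determine without an exam or test, and finish with a one-paragraph plain-language summary, a list of symptoms that would need immediate medical attention and questions I could ask my doctor.”

Swap in your own details, preferred sources and requested format for whatever issue you’re researching.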

Ask it follow-up questions

Once you’ve read the AI chatbot’s response, ask it follow-up questions for additional detail, clarification or more sources if anything is missing or unclear. You can even ask it to review and optimize your original prompt or explain what other information it needs to provide a more thorough and accurate answer. If you see a mistake, point it out and ask the AI chatbot to correct it.  

Where do we go from here?

AI is moving so fast that today’s chatbots might be replaced by even more powerful AI sooner than we think. The wave coming after generative AI is called “agentic AI,” built around agents that could, for example, schedule your next wellness exam and then regularly remind you to take your blood pressure at home afterward if you’d discussed having high blood pressure during your visit.  

“The applications that I’m seeing show up in healthcare are really for navigation of the healthcare system, like booking an appointment,” says Scott. “The AI can ask the patient questions and lead them through it, and can be very smart, flexible and empathetic to the emotions that people sometimes come in with.”  

There is undeniable potential in using AI as a tool to improve healthcare and to become more knowledgeable and empowered about your own health. The key is to use it as a tool and to not hand over your health decisions to an AI chatbot – that’s what your doctor is for.