Seeking Self-Care Information in the Age of ChatGPT
With a growing shortage of clinicians and an aging population, more patients are seeking self-care information. But navigating the available AI tools remains a challenge, even for the most internet-savvy.

By John Halamka, M.D., Diercks President, Mayo Clinic Platform, and Paul Cerrato, M.A., senior research analyst and communications specialist, Mayo Clinic Platform
Patients seeking self-care information increasingly use AI-powered chatbots, including Gemini, ChatGPT, and Claude. Even a casual search on Google now offers the option of using an AI tool that behaves much like a chatbot. The problem with most of these digital tools is their source of data, which is usually the entire internet, a mix of accurate and inaccurate information. Telling the difference between the two remains a challenge.
Regardless of how you search for possible conditions, it’s important to use the same critical-thinking skills and hypothesis-testing approach researchers use when they investigate disease. Clinicians typically search the professional literature with resources like PubMed, UpToDate, and ClinicalKey to find the latest, most reliable research and professional recommendations on diagnosis and treatment, and they rarely fall victim to the latest conspiracy theories in the popular press and on social media. They may also take advantage of large language models, but they shy away from relying on general-purpose chatbots like ChatGPT, Claude, or Gemini. They are more likely to use tools such as OpenEvidence and Consensus, which draw their content from peer-reviewed medical journals.
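For readers curious about what such a literature search looks like programmatically, here is a minimal sketch, our own illustration rather than any Mayo Clinic tool, that queries PubMed through the National Library of Medicine's public E-utilities API. The search term and result limit are arbitrary placeholders.

# Minimal sketch: searching PubMed via NCBI's public E-utilities API.
# The query term below is an arbitrary example, not a recommendation.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return a list of PubMed IDs (PMIDs) matching the query."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": max_results},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def fetch_titles(pmids: list[str]) -> dict[str, str]:
    """Return a mapping of PMID to article title for the given IDs."""
    resp = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["result"]
    return {pmid: result[pmid]["title"] for pmid in pmids}

if __name__ == "__main__":
    pmids = search_pubmed("hypertension lifestyle intervention randomized controlled trial")
    for pmid, title in fetch_titles(pmids).items():
        print(pmid, title)

Production use would follow NCBI's usage guidelines, such as registering an API key and respecting rate limits.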
Clinicians have also been trained to be scientific skeptics, unwilling to accept theories about what causes a patient’s condition just because a few studies have found an association between a disease and a specific lifestyle practice or environmental exposure. One of the cardinal principles we live by is: correlation does not equal causation. A cause-and-effect relationship is usually established by a clinical trial that compares patients who receive a treatment with a control group that does not.
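As a toy illustration of that principle, the short simulation below, a made-up example rather than data from any study, generates two variables driven by a shared third factor. They end up strongly correlated even though neither causes the other.

# Toy simulation: correlation without causation via a shared confounder.
# Purely illustrative; the variables and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

confounder = rng.normal(size=n)                   # an unmeasured shared factor
x = confounder + rng.normal(scale=0.5, size=n)    # a "lifestyle practice" measure
y = confounder + rng.normal(scale=0.5, size=n)    # a "disease marker" measure

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson correlation between x and y: {r:.2f}")
# Prints a correlation of roughly 0.8, yet x does not cause y (or vice versa);
# both are driven by the confounder. Only a controlled experiment that
# manipulates x directly could establish a causal effect on y.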
Regardless of the chatbot a patient chooses, it’s always best to double-check the answer against other sources. As experienced users have learned, ChatGPT and other general-purpose large language models have been known to invent “facts” and even cite non-existent references. It’s also wise to ask follow-up questions that push the chatbot to back up its answers with convincing evidence, and then to verify that content against more trustworthy sources. In a previous column, we listed several reliable sources that patients can turn to, including the American Heart Association, Mayo Clinic, and the National Patient Advocate Foundation.
AI is math, not magic. It can augment our work and our decision-making, but we must understand its limitations. Wise use of these tools can educate us and help us on our care journeys, but we must not blindly accept every recommendation.