If Only Thinking Made It So

Cognitive errors challenge clinicians and technologists alike. Being aware of the types of mistakes that can occur is the first step toward fixing them.

By John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform

“One of the great challenges in life is knowing enough to think you’re right, but not enough to know you’re wrong.” When the astronomer Neil deGrasse Tyson penned those words, he probably wasn’t thinking about the cognitive errors that sometimes affect health care workers. But they certainly apply: Imagine working in a hospital emergency department and seeing several patients walk in disheveled, with slurred speech and difficulty walking a straight line. Alcohol intoxication is one of the first things that would come to mind, and it is often confirmed by a blood test. So naturally, when your next patient presents with slurred speech and rumpled clothing, your mind immediately turns to your last intoxicated case. And if you’re already overwhelmed with a room full of patients with more urgent needs, it’s easy to put this patient into the same “bucket,” when in fact a more thorough evaluation might reveal a stroke, which can also present with slurred speech and altered gait.

That common mistake, called availability bias, is one of many cognitive errors that we all occasionally fall victim to. Nicola Cooper, a professor at the University of Nottingham School of Medicine, explains: “Availability bias is when things are at the forefront of your mind because you have seen several cases recently or have been studying the condition in particular.” Unfortunately, it is only one of several reasoning mistakes clinicians and technologists are prone to. Cooper also includes overconfidence bias in her list of cognitive errors, which brings to mind Tyson’s comment about knowing enough to think you’re right, but not enough to know you’re wrong. It underscores the need to avoid relying too heavily on our past experience and opinions and instead concentrate on gathering all the available evidence before reaching a tentative diagnosis. Collecting all the relevant evidence can be time-consuming, which is one reason clinicians rely on heuristics, disease scripts, and a variety of shortcuts to reach a conclusion when pressed for time.

During the diagnostic process, clinicians are inclined toward fast Type 1 thinking. As we explain in Reinventing Clinical Decision Support, Type 1 thinking is used by most experienced clinicians because it’s an essential part of the pattern recognition process. This intuitive mode employs heuristics and inductive shortcuts to help them arrive at quick conclusions about what’s causing a patient’s collection of signs and symptoms. It serves them very well when the pattern is consistent with a common disease entity. Recognizing the typical signs and symptoms of an acute myocardial infarction, for example, allows clinicians to quickly take action to address the underlying pathology. There are hundreds of such disease scripts that physicians and nurses have committed to memory and that immediately come to mind in a busy clinical setting.

Of course, this intuitive approach can be affected by a clinician’s impressions of a patient’s demeanor, how the patient appeared in the past, the clinician’s biases toward “troublesome” patient types, as well as distractions in the work environment. Pat Croskerry, M.D., Ph.D., professor, Department of Emergency Medicine, Faculty of Medicine and Division of Medical Education, Dalhousie University, Halifax, Nova Scotia, Canada, points out: “The system is fast, frugal, requires little effort, and frequently gets the right answer. But occasionally it fails, sometimes catastrophically. Predictably, it misses the patient who presents atypically, or when the pattern is mistaken for something else.”

Type 2 reasoning, on the other hand, is particularly effective when the patient’s presentation follows no obvious disease script, when the pattern is atypical, and when there is no unique pathognomonic signpost to clinch the diagnosis. It usually starts with a hypothesis that is then subjected to analysis with the help of critical thinking, logic, multiple branching, and evidence-based decision trees and rules. This analytic approach also requires an introspective mindset that is sometimes referred to as metacognition, or the “ability to step back and reflect on what is going on in a clinical situation.”
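To make the idea of an evidence-based decision rule concrete, here is a minimal sketch in Python of how such a rule can be encoded: weighted clinical criteria summed into a score that maps to a risk tier. The criteria, weights, and cutoffs below are hypothetical placeholders for illustration, not a validated clinical rule.

```python
# A minimal sketch of an evidence-based decision rule: weighted clinical
# criteria summed into a score, then mapped to a risk tier.
# NOTE: the criteria, weights, and cutoffs are hypothetical placeholders,
# not a validated clinical rule.

CRITERIA = {
    "tachycardia": 1.5,      # hypothetical weight
    "recent_surgery": 1.5,   # hypothetical weight
    "prior_event": 1.5,      # hypothetical weight
    "hemoptysis": 1.0,       # hypothetical weight
}

def risk_tier(findings: dict[str, bool]) -> str:
    """Sum the weights of the findings that are present and map the
    total to a low/moderate/high tier (illustrative cutoffs)."""
    score = sum(weight for name, weight in CRITERIA.items() if findings.get(name))
    if score >= 4.0:
        return "high"
    if score >= 2.0:
        return "moderate"
    return "low"

print(risk_tier({"tachycardia": True, "recent_surgery": True}))  # -> "moderate"
```

Rules of this kind are deliberately transparent: every criterion and weight can be inspected and debated, which is what makes them useful scaffolding for slow, analytic reasoning.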

This metacognitive skill set also lets clinicians run through a list of common cognitive errors that can easily send them in the wrong direction. But because Type 2 reasoning is a much slower process, it is often a challenge to implement, especially in high-stress, high-volume settings. To be most effective, the slow, reflective Type 2 mode requires a well-rested clinician who is not distracted and does not carry an unreasonably heavy workload, so that he or she can fully use his or her analytical skills and memory. Too few work environments satisfy these prerequisites.

Technologists are likewise prone to cognitive errors and biases. In a recent paper we published in BMJ Health and Care Informatics, for instance, we discuss the problem of algorithmic bias. This bias can occur when an algorithm is trained on a faulty data set, one that underrepresents or otherwise disadvantages Black patients, women, and those in lower socioeconomic groups. An algorithm used to help diagnose myocardial infarction that is derived from the experience of male physicians who routinely ignore the atypical signs and symptoms that can develop in female MI patients is not going to be a very useful tool.
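One practical safeguard is to audit a model’s performance separately for each demographic subgroup before it is deployed. The sketch below, using pandas and scikit-learn, shows what such an audit might look like; the model, column names, and workflow are illustrative assumptions, not a description of any specific tool.

```python
# A minimal sketch of a subgroup performance audit, assuming a trained
# scikit-learn-style binary classifier `model` and held-out test data.
# The `sex` column and other names are illustrative assumptions.
import pandas as pd
from sklearn.metrics import recall_score

def audit_by_group(model, X_test: pd.DataFrame, y_test: pd.Series,
                   group_col: str) -> pd.Series:
    """Report sensitivity (recall) separately for each subgroup, so a
    model that systematically misses one group's presentations is visible."""
    # Assumes the model was trained without the demographic column itself.
    features = X_test.drop(columns=[group_col])
    preds = pd.Series(model.predict(features), index=X_test.index)
    return X_test.groupby(group_col).apply(
        lambda g: recall_score(y_test.loc[g.index], preds.loc[g.index])
    )

# Usage (illustrative): audit_by_group(mi_model, X_test, y_test, "sex")
```

A sizable gap in recall between subgroups, say, between male and female patients, is exactly the kind of blind spot the MI example above describes, and it should be caught before the model reaches the bedside.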

Similarly, developers need to be aware of the risk of data shift, another common problem that can undermine an algorithm’s usefulness. In plain English, data shift occurs when the data collected during the development of an algorithm change over time, so that they no longer match the data the algorithm encounters once it is implemented. For example, the patient demographics used to create a model may no longer represent the patient population by the time the algorithm is put into clinical use.
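Monitoring for this kind of shift can be automated. The sketch below compares the distribution of each feature at training time with the distribution observed in deployment using a two-sample Kolmogorov-Smirnov test; the column names and significance threshold are illustrative assumptions, and other drift metrics would serve equally well.

```python
# A minimal sketch of a data-shift check: flag features whose live
# distribution differs significantly from the training distribution.
# Column names and the alpha threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def detect_shift(train: pd.DataFrame, live: pd.DataFrame,
                 features: list[str], alpha: float = 0.01) -> dict[str, bool]:
    """Run a two-sample Kolmogorov-Smirnov test per numeric feature;
    True means the deployed data have drifted from the training data."""
    flags = {}
    for col in features:
        _, p_value = ks_2samp(train[col].dropna(), live[col].dropna())
        flags[col] = p_value < alpha
    return flags

# Usage (illustrative): detect_shift(train_df, live_df, ["age", "systolic_bp"])
```

A flagged feature, such as patient age, signals that the deployed population no longer matches the development cohort and that the model may need retraining or recalibration.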

While the list of cognitive errors is long, understanding how they can occur and having the humility to recognize them improves patient care—and separates the novice from the expert.

