Finding a Place for AI in the ED
AI-based algorithms have a place in the emergency department, but clinicians still need to separate signal from noise.

By John Halamka, M.D., Diercks President, Mayo Clinic Platform, and Paul Cerrato, MA, senior research analyst and communications specialist, Mayo Clinic Platform
To say that the emergency department (ED) is often overcrowded and understaffed states the obvious. And for many older patients, the long waits can be especially harmful. Recent statistics tell a troubling story: health records from over 1,600 hospitals and nearly 300 million patients aged 65 or older indicate that one in five spent more than 8 hours in the ED. That kind of prolonged exposure to immobility, sleep disruption, and environmental change is enough to cause serious psychological harm to anyone, but it is especially damaging to the elderly.
Researchers have explored several ways in which AI might help ease the burdens patients and clinicians face in the ED. It can improve triage, enabling clinicians to determine more quickly which patients need immediate attention. There is also evidence to suggest that AI-driven algorithms may improve clinical decision making, helping practitioners arrive at a more accurate diagnosis. Others have found that these tools can reduce ED wait times or “optimize resource allocation,” as one review paper puts it.
Mayo Clinic investigators who analyzed hundreds of thousands of patient records found that an AI system built on XGBoost can automate medical coding for ED visits. They reviewed these encounters, using Current Procedural Terminology (CPT) codes to rate them from minimal to high complexity (levels 2-5), along with the relevant evaluation and management codes, 99281 to 99285. Morey et al. concluded: “Model performance for professional billing code levels of 4 and 5 yielded area under the receiver operating characteristic curve values of 0.94 and 0.95, accuracy values of 0.80 and 0.92…” when compared with the ground truth, namely coding by humans. Their findings further indicated that the model was able to identify, with 99% precision, 57% of charts documenting complex care as suitable for automated coding without human involvement.
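To make that workflow concrete, here is a minimal sketch of a confidence-gated coding model. Everything in it is an illustrative assumption: the synthetic features, the four-level label scheme, the XGBoost settings, and the 0.95 probability cutoff. The post does not describe the Mayo model's actual inputs or thresholds.

```python
# Hypothetical sketch of confidence-gated automated coding. Feature values,
# labels, and the 0.95 cutoff are made up for illustration; they are not
# the inputs or thresholds of the model described by Morey et al.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for chart-derived features (e.g., counts of orders,
# problems, procedures) and E/M complexity labels 0..3.
X = rng.normal(size=(5000, 12))
y = rng.integers(0, 4, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    objective="multi:softprob",
    eval_metric="mlogloss",
)
model.fit(X_train, y_train)

# Auto-code only charts where the top predicted class is very confident;
# everything else falls back to a human coder.
proba = model.predict_proba(X_test)
confidence = proba.max(axis=1)
auto_mask = confidence >= 0.95

print(f"auto-coded: {auto_mask.mean():.1%} of charts")
print(f"sent to human review: {(~auto_mask).mean():.1%}")
```

Raising the cutoff sends more charts to human review but makes the auto-coded subset more reliable; that coverage-versus-precision tradeoff is what the 99% precision, 57% coverage operating point reflects.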
Other studies have found that AI models may improve ED triage. One group, for instance, found that AI algorithms detected sepsis better than the Emergency Severity Index (ESI) traditionally used in this setting. In fact, Marsilio et al. found that the model sped up the time to intervention and was better at identifying high-risk patients. A separate analysis concluded: “implementing NLP within AI-driven triage systems significantly enhanced clinicians’ ability to detect early signs of respiratory complications, underscoring NLP’s role in managing unstructured clinical data.”
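Neither study's architecture is detailed here, but the underlying idea of NLP-assisted triage can be sketched with standard tools. The toy notes, labels, and TF-IDF-plus-logistic-regression pipeline below are assumptions for illustration, not the models the studies evaluated.

```python
# Hypothetical illustration of NLP-assisted triage: a bag-of-words model
# that flags free-text triage notes for a possible sepsis work-up. Real
# systems train on large EHR corpora; these notes and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: triage notes labeled 1 if the visit led to a sepsis
# diagnosis, 0 otherwise.
notes = [
    "fever rigors confusion low blood pressure rapid heart rate",
    "ankle pain after fall no fever ambulatory",
    "productive cough fever tachypnea elderly nursing home resident",
    "minor laceration left hand no systemic symptoms",
]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipeline.fit(notes, labels)

new_note = "temp 39.2 heart rate 118 altered mental status"
risk = pipeline.predict_proba([new_note])[0, 1]
print(f"sepsis flag probability: {risk:.2f}")  # escalate triage if high
```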
In a recent conversation, Richard Winters, MD, a medical director for Mayo Clinic Platform and an emergency medicine physician at Mayo Clinic Hospital in Rochester, Minnesota, confirmed that several AI systems can help supplement triage and clinical decision making in the ED. But he also pointed out a problem that many AI enthusiasts overlook: the signal-to-noise ratio.
In the ED, clinicians face numerous interruptions and must juggle many responsibilities simultaneously. “I can usually separate the signal from the noise as I receive information from nurses, PAs, pharmacists, the EHR, patients, and their families, but I sometimes worry that I may have anchored too soon and chosen the wrong signal. Do we need to pivot? For example, a patient came into the ED with a nosebleed. We stopped the nosebleed and the patient was ready to go, but if we hadn’t noticed a slightly wobbly gait and recognized that something else might be going on, the patient could have been discharged, and the brain tumor that led to vomiting, which in turn led to the bloody nose, would have been missed. If we anchor too early on a diagnosis, we may miss key signals in the noise.
“The care we provide is inherently multimodal. We attend not only to what is spoken but also to what is left unsaid; we notice timbre and tone, and we sense both movement and stillness. During my shifts, I use OpenEvidence to challenge my thinking, but I recognize that its conclusions are limited to the information I provide—the signal I extract from the noise. And while a model may score 100% on a physician licensing exam, patients do not present as multiple-choice questions. I’m also mindful that the ambient listening technologies I rely on produce transcriptions that omit crucial multimodal details: the breathlessness in a voice, the tenor of anxiety embedded in what is spoken. In the ED, as in any other clinical setting, it’s important to keep asking ourselves: Am I recognizing the signals that matter most in the noise?”
Clearly, the next step for AI in healthcare is adoption. That requires bringing the right solutions for the patient to the right clinician at the right time in the right setting. The emergency department, with all its complexity, is a great learning laboratory for understanding what works.