AI-Enhanced Cardiology Takes Another Step Forward

Combining a convolutional neural network with routine ECGs detects low ejection fraction, a signpost for asymptomatic left ventricular systolic dysfunction

John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform, wrote this article.

Asymptomatic left ventricular systolic dysfunction (ALVSD) may not be the most familiar disorder in medicine, but it nonetheless increases a patient’s risk of heart failure and death. Unfortunately, ALVSD is not easily detected. Characterized by low ejection fraction (EF) — a measure of how much blood the heart pumps out during each contraction — it’s readily diagnosed with an echocardiogram. But because the procedure is expensive, it’s not recommended as a routine screening test for the general public. A recently developed AI-enhanced algorithm, used in conjunction with a routine ECG, can identify low EF. It is one of many advances that will eventually make machine learning an essential part of every clinician’s “tool kit.”
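For readers less familiar with the metric, EF is simply the fraction of the ventricle’s filled volume that is ejected with each beat. The short sketch below illustrates the standard calculation; the volumes and the “low EF” cutoff are illustrative assumptions, not values taken from the EAGLE trial.

```python
# Illustrative calculation of ejection fraction (EF).
# The volumes and the low-EF cutoff below are example values,
# not figures taken from the EAGLE trial.

def ejection_fraction(end_diastolic_ml: float, end_systolic_ml: float) -> float:
    """EF = stroke volume / end-diastolic volume, expressed as a percentage."""
    stroke_volume = end_diastolic_ml - end_systolic_ml
    return 100.0 * stroke_volume / end_diastolic_ml

# Example: a ventricle that fills to 120 ml and empties to 50 ml.
ef = ejection_fraction(end_diastolic_ml=120, end_systolic_ml=50)
LOW_EF_CUTOFF = 50.0  # assumed illustrative threshold for "low EF"

print(f"EF = {ef:.1f}% -> {'low' if ef < LOW_EF_CUTOFF else 'normal'}")
# EF = 58.3% -> normal
```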

The trial of the new algorithm, a joint effort between several of Mayo Clinic’s clinical departments and Mayo Clinic Platform, was published online in Nature Medicine. The EAGLE trial included over 22,000 patients, divided into intervention and control groups and managed by 358 clinicians from 45 clinics and hospitals. The AI-enhanced ECG was applied to patients in both groups, but only clinicians allocated to the intervention arm had access to the AI results when deciding whether or not to order an echocardiogram. In the final analysis, 49.6% of patients whose physicians had access to the AI data underwent echocardiography, compared to 38.1% of patients in the control arm (odds ratio 1.63, P < 0.001). Xiaoxi Yao, with the Kern Center for the Science of Health Care Delivery, Mayo Clinic, and associates reported that “the intervention increased the diagnosis of low EF in the overall cohort (1.6% in the control arm versus 2.1% in the intervention arm) and among those who were identified as having a high likelihood of low EF.” Using the AI tool enabled primary care physicians to increase the diagnosis of low EF overall by 32% compared to the diagnosis rate among patients who received usual care. In absolute terms, for every 1,000 patients screened, the AI system generated five new diagnoses of low EF compared to usual care.
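The relative and absolute figures above follow directly from the two diagnosis rates quoted from the paper. The brief sketch below reproduces the arithmetic; the published 32% presumably reflects unrounded rates, so the rounded inputs here give roughly 31%.

```python
# Arithmetic behind the reported effect sizes, using the diagnosis
# rates quoted above (1.6% with usual care vs. 2.1% with the AI tool).

control_rate = 0.016       # low-EF diagnoses per patient, usual care
intervention_rate = 0.021  # low-EF diagnoses per patient, AI-guided care

absolute_increase = intervention_rate - control_rate
relative_increase = absolute_increase / control_rate
new_diagnoses_per_1000 = absolute_increase * 1000

print(f"Relative increase: {relative_increase:.0%}")  # ~31% with these rounded rates; reported as 32%
print(f"New diagnoses per 1,000 screened: {new_diagnoses_per_1000:.0f}")  # 5
```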

Earlier research had already established a strong evidence base for the neural network behind the AI tool. A growing number of thought leaders in medicine have criticized the rush to generate AI-based algorithms because many lack the solid scientific foundation required to justify their use in direct patient care. Among the criticisms leveled at AI developers are concerns about algorithms derived from a dataset that is never validated against a second, external dataset, overreliance on retrospective analysis, lack of generalizability, and various types of bias, issues that we discuss in The Digital Reconstruction of Healthcare. The EAGLE trial investigators addressed many of these concerns by testing their algorithm on more than one patient cohort. An earlier study trained the convolutional neural network on ECGs from over 44,000 Mayo Clinic patients and then tested it on an independent group of nearly 53,000 patients. And while that study was retrospective in design, other studies have confirmed the algorithm’s value in clinical practice using a prospective design. The most recent study, cited at the beginning of this post, was not only prospective but also pragmatic, reflecting the real world in which clinicians practice. Traditional randomized controlled trials consume a lot of resources, take a long time to conduct, and usually include a long list of inclusion and exclusion criteria that patients must meet. The EAGLE trial, on the other hand, was performed among patients in everyday practice.
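To make the external-validation point concrete, here is a minimal sketch of the general pattern: fit a model on one cohort and report performance only on an independent cohort. It uses synthetic data and a simple scikit-learn classifier as a stand-in for the ECG convolutional neural network; none of the variable names or numbers come from the Mayo Clinic studies.

```python
# General pattern of external validation: fit on an internal cohort,
# report performance only on an independent external cohort.
# Synthetic data and a logistic-regression stand-in for the ECG CNN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n_patients: int) -> tuple[np.ndarray, np.ndarray]:
    """Synthetic 'ECG features' and low-EF labels, for illustration only."""
    X = rng.normal(size=(n_patients, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_patients)) > 1.5
    return X, y.astype(int)

X_internal, y_internal = make_cohort(5_000)   # development cohort
X_external, y_external = make_cohort(2_000)   # held-out external cohort

model = LogisticRegression(max_iter=1000).fit(X_internal, y_internal)

# Only the external cohort's score is reported as evidence of generalizability.
external_auc = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
print(f"External-cohort AUC: {external_auc:.2f}")
```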

