Responsible AI Requires a Delicate Balancing Act

Regulators who put too many restrictions on the technology can slow down innovation, but if they set up guardrails that aren’t strict enough, they needlessly endanger patients’ lives.

By John Halamka, M.D., President, Mayo Clinic Platform, and Paul Cerrato, MA, Senior Research Analyst and Communications Specialist, Mayo Clinic Platform

Anyone who has ever been to the circus has seen the high-wire act. A brave soul walks across a thin wire, impressing viewers with their daring, concentration, and exceptional fine motor skills. That feat of courage is not all that different from the balancing act that developers, regulators, and providers must achieve to manage healthcare AI.

Innovative, safe AI almost sounds like an oxymoron. Innovation brings to mind risky, unconventional solutions, while safety brings to mind the crossing guard at your local high school, whose job it is to stop all traffic while children cross the street. Responsible AI is probably a better term for what stakeholders must aim for; the goal is to “color outside the lines” while still remembering our most important responsibility, namely patients’ well-being.

With that in mind, the Office of the National Coordinator for Health Information Technology (ONC), a branch of the U.S. Department of Health and Human Services, recently issued its final rule to advance health IT interoperability and algorithm transparency. Its goal is to make it “possible for clinical users to access a consistent, baseline set of information about the algorithms they use to support their decision making and to assess such algorithms for fairness, appropriateness, validity, effectiveness, and safety.” Officially titled Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, the rule, known as HTI-1, addresses four critically important areas: algorithm transparency, USCDI Version 3, enhanced information-blocking provisions, and new interoperability-focused reporting metrics for certified health IT.

For readers not familiar with certified health IT, a brief history will help put the latest regulation in context. Launched in 2010, the voluntary Certification Program was established by ONC to provide for the certification of health IT and to support its availability under other federal, state, and private programs. For example, the Centers for Medicare & Medicaid Services (CMS) Promoting Interoperability (PI) Programs (previously the Medicare and Medicaid EHR Incentive Programs) require the use of health IT certified under the Certification Program. Throughout the program’s evolution, ONC has released multiple editions of certification criteria and regulations for new and expanded requirements. The new rule is the first to fully address the impact of AI-enabled algorithms on the healthcare industry.

The list of problems that have surfaced since AI algorithms became commercially available is long. Some algorithms have been derived from data sets too small to represent the patient populations they claim to serve; others have ignored the needs of people of color, women, and those in lower socioeconomic groups, as we outlined in an article published in BMJ Health & Care Informatics. Still others have never been fully validated, relying too heavily on retrospective rather than prospective analysis. The new rule requires vendors to be transparent, explaining in adequate detail how their algorithms are fair, valid, and safe. It places special emphasis on what it calls decision support interventions (DSIs) and predictive algorithms.
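To make one of these failure modes concrete, consider how a developer or health system might check for uneven performance across subgroups before deploying a model. The Python sketch below is a minimal illustration using synthetic data and hypothetical column names (outcome, risk_score, group); it is not a method prescribed by the rule, just one simple way to surface the warning sign that a model performs well overall but poorly for an under-represented group.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical hold-out set: true outcomes, model risk scores, and one
# demographic attribute. Column names and values are illustrative only.
results = pd.DataFrame({
    "outcome":    [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "risk_score": [0.9, 0.2, 0.7, 0.8, 0.3, 0.4,
                   0.6, 0.1, 0.85, 0.35, 0.55, 0.25],
    "group":      ["A"] * 6 + ["B"] * 6,
})

# Overall discrimination across the whole population.
overall = roc_auc_score(results["outcome"], results["risk_score"])
print(f"overall AUROC: {overall:.2f}")

# Per-subgroup discrimination: a large gap between subgroups is one
# warning sign that the training data under-represented part of the
# population the model claims to serve.
for name, subset in results.groupby("group"):
    auc = roc_auc_score(subset["outcome"], subset["risk_score"])
    print(f"group {name}: AUROC = {auc:.2f}")
```

Reporting subgroup metrics alongside the headline number is one straightforward way to act in the spirit of the rule’s fairness and validity expectations.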

ONC states the issue this way: “The DSI criterion, as finalized, ensures that Health IT Modules … reflect an array of contemporary functionalities, support data elements important to health equity, and enable the transparent use of predictive models and algorithms to aid decision-making in healthcare.” ONC defines a predictive DSI as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis.”
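To ground that definition, the toy Python sketch below “derives relationships from training data” by fitting a logistic regression, and then “produces an output that results in prediction,” a risk estimate for a new patient. The features, values, and model choice are our own illustration, not anything specified in the rule.

```python
from sklearn.linear_model import LogisticRegression

# Entirely synthetic training data: [age, prior_event] per patient and a
# binary outcome. Feature names and values are illustrative only.
X_train = [[62, 1], [45, 0], [71, 1], [50, 0], [68, 1], [39, 0]]
y_train = [1, 0, 1, 0, 1, 0]

# "Derive relationships from training data" ...
model = LogisticRegression().fit(X_train, y_train)

# ... "then produce an output that results in prediction": a risk
# estimate for a new patient that a clinician can weigh, not a verdict.
new_patient = [[58, 1]]
print(f"predicted risk: {model.predict_proba(new_patient)[0][1]:.2f}")
```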

The new rule has also refined and clarified ONC’s position on information blocking. Over the years, many patients have complained that they were not given quick access to their medical records, or had to contend with all sorts of red tape to obtain them. In some cases, that has led to delays in treatment and potential harm. The latest iteration of the ONC rule says that a healthcare provider is blocking information when a practice will likely interfere with the access, exchange, or use of electronic health information. Similarly, a developer engages in such blocking when it knows, or should know, that it is interfering with the access, exchange, or use of that information. As before, ONC lists several exceptions. One new exception added to the rule involves practices related to actors’ participation in the Trusted Exchange Framework and Common Agreement (TEFCA).

Despite its emphasis on adequately tested predictive algorithms, ONC is quick to point out that it will not be testing or approving these digital tools. The purpose of the new rule is to provide others with enough information to determine whether the products and services they are investing in are FAVES: fair, appropriate, valid, effective, and safe.

