Here’s to the Cautious Ones

The latest proposed rules from CMS and FDA emphasize the need for a balanced, nuanced approach to digital innovation.

By John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform

Last week we penned a blog post, Here’s to the Crazy Ones, which sang the praises of fearless innovators who have the courage to question received wisdom and challenge the status quo in medicine and technology. Although their determination to bring new solutions to market is praiseworthy, there is such a thing as too much unconventional thinking, especially if it results in unsafe clinical decisions that do more harm than good. With this caveat in mind, the federal government recently issued two sets of proposed guidelines to help developers and health care providers navigate the complexities of clinical decision support software.

The Centers for Medicare &amp; Medicaid Services (CMS) issued a proposed rule on discrimination that includes guidance on the use of AI in clinical practice. It suggests that "augmented intelligence" rather than "artificial intelligence" is a best practice: using algorithms without human review and oversight could result in patient harm and increased liability, while using algorithms as an adjunct to existing human-driven processes is much less likely to pose these problems. Similarly, the Food and Drug Administration (FDA) released a seminal proposed rule on the regulation of health care AI. The rule provides a framework for evaluating AI innovations, using four "tests" and numerous use-case examples to determine which ones will be considered medical devices subject to FDA oversight.

The proposed CMS rule is designed to provide practical advice on how providers and other entities can comply with Section 1557 of the Affordable Care Act. That section prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in certain health programs and activities. More specifically, its purpose is “to identify and prevent discrimination based upon the use of clinical decision tools and technological innovation in health care.” The rule goes on to state:

“For example, covered entities may choose to establish written policies and procedures governing how information from clinical algorithms will be used in decision-making; monitor any potential impacts; and train staff on the proper use of such systems in decision-making. …While covered entities are not liable for clinical algorithms that they did not develop, they may be held liable under this provision for their decisions made in reliance on clinical algorithms.

“Covered entities using clinical algorithms in their decision-making should consider clinical algorithms as a tool that supplements their decision-making, rather than as a replacement of their clinical judgment. By overrelying on a clinical algorithm in their decision-making, such as by replacing or substituting their own clinical judgment with a clinical algorithm, a covered entity may risk violating Section 1557 if their decision rests upon or results in discrimination.”

The FDA proposed rule, on the other hand, provides specific guidelines to help developers and users differentiate between the AI-based algorithms and other clinical decision support (CDS) systems that are considered medical devices and those that are not. It lists four criteria, all of which must be met before such a digital tool can be labeled as not being a medical device requiring the agency’s approval:

  • The software does NOT acquire, process, or analyze medical images, signals, or patterns.
  • The software displays, analyzes, or prints medical information normally communicated between health care professionals (HCPs).
  • The software provides recommendations (information/options) to an HCP rather than providing a specific output or directive.
  • The software provides the basis for its recommendations so that the HCP does not rely primarily on any one recommendation to make a decision.

According to criteria 1 and 2, non-device software can display, analyze, or print medical information, provided that information is not an image, signal, or pattern. Examples include:

  • Information the CDS system provides that is well understood.
  • A single, discrete test result that is clinically meaningful.
  • A report from an imaging study.

According to the third criterion, non-device examples provide lists of preventive, diagnostic, or treatment options, clinical guidance matched to patient-specific medical information, or relevant reference information about a disease or condition. According to the fourth criterion, non-device examples provide plain-language descriptions of the software’s purpose, medical input, underlying algorithm, or relevant patient-specific information, as well as other knowns and unknowns for consideration. The agency emphasizes that all four criteria must be met.
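The agency’s all-four-must-be-met logic can be sketched as a simple boolean check. This is purely an illustration of the decision rule as we read it; the class, field names, and function below are ours, not FDA terminology or tooling:

```python
# Hypothetical sketch of the FDA's four non-device criteria as an
# all-or-nothing check. Names are illustrative, not FDA terminology.
from dataclasses import dataclass


@dataclass
class CDSProfile:
    """Answers to the four criteria for one CDS tool (illustrative)."""
    analyzes_images_signals_or_patterns: bool      # criterion 1 (must be False)
    displays_hcp_medical_info: bool                # criterion 2
    provides_recommendations_not_directives: bool  # criterion 3
    discloses_basis_of_recommendations: bool       # criterion 4


def is_non_device(cds: CDSProfile) -> bool:
    """A CDS tool avoids device classification only if ALL four criteria hold."""
    return (
        not cds.analyzes_images_signals_or_patterns
        and cds.displays_hcp_medical_info
        and cds.provides_recommendations_not_directives
        and cds.discloses_basis_of_recommendations
    )


# Example: a risk-score app that processes ECG waveforms fails criterion 1,
# so even if it meets the other three, it would still be regulated as a device.
risk_app = CDSProfile(True, True, True, True)
print(is_non_device(risk_app))  # False
```

The point the sketch makes is that the criteria are conjunctive: failing any single test is enough to place a tool under FDA oversight.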

On the other hand, CDS tools that are considered software as a medical device (SaMD) and subject to stricter regulation include continuous glucose monitoring (CGM) systems, computer-aided detection/diagnosis (CADe/CADx) tools, software that analyzes medical images or waveforms (such as ECGs), and apps that provide risk scores for a disease.

The rules provided by CMS and FDA remind us all that advances in health care informatics and clinical care require a delicate balance between crazy and cautious.

Our Coalition for Health AI shares many of the same goals, including the removal of bias from health algorithms, and we look forward to offering our support and expertise as the policy process evolves.
