AI Regulations Require a Nuanced Approach
The U.S. government and the European Union both realize the need for measured control over the latest technology. Implement too many regulations too quickly and you stifle innovation. Move too slowly and you risk harm from irresponsible players in the field.
By John Halamka, M.D., President, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.
It’s no exaggeration to say that AI, including generative AI and large language models, is transforming healthcare. When carefully executed, it can:
- Reduce administrative burden and cognitive load, improving efficiency in tasks such as writing prior authorizations
- Improve customer service, including medical benefits administration
- Reduce physician and nurse burnout through ambient listening tools linked to EHRs, which in turn cut documentation time
But thought leaders throughout the healthcare ecosystem are also acutely aware of the potential harm that the latest AI tools can do in the wrong hands. With these concerns in mind, the U.S. president recently issued an executive order on safe, secure, and trustworthy AI that is bound to impact developers, tech companies, and providers alike. To protect the public from the potential risks of AI systems, the order:
- Requires developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government
- Directs the development of standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy
- Protects against the risks of using AI to engineer dangerous biological materials
- Protects Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content
- Establishes an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software
- Orders the development of a National Security Memorandum that directs further actions on AI and security
As STAT News recently pointed out, the new directive instructs HHS to create a system that accepts reports from users about AI dangers and unsafe practices, and to act on them. But the STAT reporter also highlighted the fact that the United States lags behind the European Union (EU) and the standards it is developing to provide guardrails for healthcare AI. In June 2023, the European Parliament passed the text of the Artificial Intelligence Act, clearing the way for the European Council to debate the final details before the AI Act is formally enacted into law. The act includes a list of prohibited AI practices that pose an unacceptable risk to public safety and individuals' rights. The list includes:
- “The use of facial recognition technology in public places;
- AI which may influence political campaigns;
- Social scoring AI which classifies people based on certain characteristics or behaviours;
- Emotion recognition AI.”
Offenders who violate these prohibitions can be fined up to 40 million euros.
The regulations enacted by the United States and Europe may not address all the challenges that have surfaced since the introduction of GPT-4 and similar LLMs. But they're an important step in the right direction.