Digital Health Frontier Column
  • Can Healthcare Providers “Claw” Their Way Out of Their Administrative Burdens?

    3 minutes

The AI agent offers several labor-saving features but also poses several serious risks.

By John Halamka, M.D., M.S., Diercks President, Mayo Clinic Platform and Paul Cerrato, MA, senior research analyst and communications specialist, Mayo Clinic Platform and a Professor at Northeastern University.

OpenClaw has captured the imagination—and the pocketbooks—of several healthcare organizations.  Apparently they see the AI agent as a cost-effective solution to several problems. OpenClaw says it “clears your inbox, sends emails, manages your calendar, checks you in for flights. All from WhatsApp, Telegram, or any chat app you already use.”  It is easy to see why overwhelmed healthcare providers might find these capabilities so attractive. They are using it to connect IT systems that don’t readily communicate with one another, including competing EHR systems, messaging apps, and web portals.

To understand OpenClaw’s potential benefits, and its risks, it helps to first understand how AI agents function in general. In a sense, an AI agent is a large language model on steroids. LLMs like ChatGPT, Claude, Gemini, and LLaMA are passive chatbots that can answer a long list of questions with detailed responses, some accurate some misleading. But they leave it up to the user to take action or ignore the information. AI agents actively put the search results to work. They act autonomously, oftentimes with minimal human oversight. As Amazon Web Services explains it: “An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use that data to perform self-directed tasks that meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.” By one estimate, about 80% of Fortune 500 companies use AI Agents.

Given all the potential benefits and risks, it’s only natural to wonder what kind of impact AI agents are currently having in the healthcare ecosystem. Researchers with Mount Sinai Health System in New York addressed this question by reviewing 20 studies that evaluated how these agents handled clinical reports, electromyography interpretation, multiple choice questions, evidence synthesis, and patient data. Their analysis found that single AI agents were quite good at performing medication dosing and evidence retrieval. More specifically, one study found a medication calculation tool improved performance by about 59%, when compared to ChatGPT-4o. Others found better “evidence gathering using domain-specific web search for oncology, orthopedics, and genomics.”

The enthusiasm about AI agents has affected decision makers considering OpenClaw as well.  According to one source, several UK hospitals have incorporated the software into their networks, including East Sussex Healthcare NHS Trust and Guy’s and St Thomas NHS Foundation Trust. In the latter deployment “an "end-to-end" pathway for lung cancer has been established. This pathway integrates Optellum AI risk stratification with robotic bronchoscopy, using OpenClaw agents to coordinate the movement of data between screening models and interventional hardware. By rapidly flagging nodules and guiding robotic biopsy tools with high precision, the system has replaced weeks of invasive testing with a single targeted procedure.”

Despite the growing enthusiasm about OpenClaw, several cybersecurity experts have warned about major risks when implementing the software.  Cisco’s AI blog calls OpenClaw “a security nightmare.”  Because it can run shell commands, as well as read, write, and execute scripts, it is capable of doing serious harm to a hospital’s network if it’s not configured correctly or if a person downloads a skill that contains malicious instructions. There is already evidence to show it is capable of leaking API keys and credentials.  And as Cisco points out: “ OpenClaw’s integration with messaging applications extends the attack surface to those applications, where threat actors can craft malicious prompts that cause unintended behavior.”

On a more positive note, there are efforts being made to mitigate the risks associated with OpenClaw. Nvidia, for example, has developed NemoClaw. CNET refers to the tool as “a reference stack for the OpenClaw platform, providing a specialized infrastructure layer for easy installation with more security and privacy features.” Clearly, OpenClaw remains a work in progress.

Back to top