Are You a Rebel without a Cause?

Questioning the status quo and challenging conventional wisdom can result in innovative solutions to once intractable problems, but we must avoid reflexive contrarianism.

By John Halamka, M.D., President, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.

Most adolescents go through a developmental phase in which they see the need to become their own person and look for ways to assert their independence. Many challenge the traditions and rules their families laid down during early childhood, rebelling against what they perceive as unjustified restrictions on their personal freedom. Gradually, as they enter adulthood, they outgrow this rebel-without-a-cause mindset and focus on productive change. However, some maintain what Nobel Prize-winning economist Paul Krugman calls “reflexive contrarianism.” They take pride in disregarding conventional wisdom and expert opinion, suggesting that we must move fast and break things.

As healthcare stakeholders explore generative AI, we must do so with humility. Large language models offer us new possibilities but also create new responsibilities. At this time in history, there is no absolute right and wrong, and no one expert with all the answers. There are many use cases to explore that could disrupt the status quo, but that exploration must happen within guardrails and guidelines so that we do no "digital harm."

Several key components are necessary for the successful development and deployment of a large language model, including:

  • Extensive data collection. Large language models (LLMs) are built on vast amounts of diverse training data to learn from. More training data typically yields better performance. For healthcare applications, the training data needs to be curated (e.g., medical records, medical texts, and reference resources) so that it is relevant and topical.
  • Pre-training and fine-tuning. LLMs are typically pre-trained on a massive corpus of text data using unsupervised learning techniques. Pre-training involves training the model to minimize the difference between its predictions (e.g., the generated text responses) and the actual data or desired response. After pre-training, the model is fine-tuned on more specific, use-case-relevant tasks or data sets.
  • Powerful computing resources. Training and running large language models demand high-performance computational resources and storage capacity. These resources are needed to handle the complexity involved in training the model. This process essentially creates a brain-like network of artificial “neurons” with hundreds of millions or even billions of connections and relationships that must be trained so that accurate text responses are generated.
  • Governance framework. Successful design, development, deployment, and use of an LLM requires an AI governance and risk management framework (e.g., policies and procedures, accountability, ownership, oversight, transparency, ethics, and regulatory and legal compliance) to manage the risks and requirements associated with generative AI solutions.
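The pre-train-then-fine-tune pattern described above can be illustrated with a deliberately tiny sketch. The toy model below simply counts word pairs rather than training a neural network (real LLMs minimize a loss over billions of parameters), but it shows the same workflow: broad "pre-training" on general text, followed by "fine-tuning" on a curated, domain-specific corpus that shifts the model's predictions toward healthcare language. All names and corpora here are illustrative inventions, not part of any real system.

```python
from collections import defaultdict, Counter

class ToyBigramLM:
    """Toy next-word predictor based on bigram counts.

    This is a stand-in for an LLM: 'training' is just counting which
    word follows which, whereas a real model minimizes the difference
    between its predicted and actual next tokens. The pre-train /
    fine-tune workflow is the same shape, however.
    """

    def __init__(self):
        # For each word, a Counter of the words observed to follow it.
        self.counts = defaultdict(Counter)

    def train(self, corpus: str) -> None:
        # Count adjacent word pairs; calling train() again on new text
        # simply adds more counts, which is how this toy mimics
        # fine-tuning on an additional, curated data set.
        words = corpus.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word: str):
        # Return the most frequently observed next word, if any.
        following = self.counts.get(word.lower())
        if not following:
            return None
        return following.most_common(1)[0][0]

if __name__ == "__main__":
    lm = ToyBigramLM()

    # "Pre-training" on general-purpose text.
    lm.train("the cat sat on the mat and the dog sat on the rug")
    print(lm.predict("the"))  # some common general-text continuation

    # "Fine-tuning" on a curated domain corpus (hypothetical example text).
    lm.train("the patient presented with chest pain "
             "the patient was stable the patient improved")
    print(lm.predict("the"))  # → patient
```

After the fine-tuning pass, "patient" follows "the" more often than any general-text word, so the prediction shifts toward the clinical domain. This is, in miniature, why curated medical data matters: the fine-tuning corpus directly reshapes what the model considers likely.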

Additional components include advanced model architecture; training, deployment and operating infrastructure; operation and maintenance; and adequate financial resources.

Organizations interested in putting LLMs in place need to have the bandwidth to deliver all these components, or partner with trusted third parties that can deliver them. 

Challenging conventional wisdom has generated many innovative products and services in healthcare, but the next generation of AI tools is best adopted as the right mover, not necessarily the first mover.

