Preventing Algorithms from Running Amok

Many algorithms only reinforce a person’s narrow point of view, or encourage existing prejudices. There are better alternatives.

By John Halamka, M.D., President, Mayo Clinic Platform and Paul Cerrato, MA, senior research analyst and communications specialist, Mayo Clinic Platform

Most film and TV fans are happy when their favorite streaming service recommends a good movie based on their previous viewing habits. Welcome to the age of algorithms. Unfortunately, while these recommendation models may provide entertaining options that you hadn’t considered, there’s a price to be paid: They usually reinforce one’s narrow view of the world of arts and entertainment. While that may not matter all that much to the average movie fan, it can pose real problems when such algorithms are used to suggest news stories on current events, shape your opinion on healthcare issues, or influence the decisions of medical professionals.

Evidence strongly suggests that social media sites like Facebook reinforce users’ prejudices and belief systems by feeding them stories with a point of view similar to their own. Users who lean liberal are likely to be shown new content with a liberal slant, and those with a conservative perspective are likely to be fed stories that agree with theirs, in effect creating an echo chamber sometimes referred to as a “filter bubble.”
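To make the mechanism concrete, here is a toy sketch of our own (not any platform’s actual code) showing how a ranking rule that favors stories closest to a user’s past reading narrows the feed over time. The slant scores and story names are purely illustrative.

```python
# Toy illustration of a "filter bubble": stories closest to the average
# slant of a user's reading history get ranked first, so the feed drifts
# toward one viewpoint. All values here are hypothetical.

# Each story has a "slant" score from -1.0 (one viewpoint) to +1.0 (the other).
stories = {
    "story_a": -0.8,
    "story_b": -0.2,
    "story_c": 0.1,
    "story_d": 0.7,
}

def rank_feed(reading_history, catalog):
    """Rank stories by closeness to the mean slant of what the user already read."""
    profile = sum(reading_history) / len(reading_history)
    return sorted(catalog, key=lambda s: abs(catalog[s] - profile))

# A user whose history leans toward -1.0 is served like-minded stories first.
feed = rank_feed([-0.9, -0.6], stories)
print(feed)  # stories closest to the user's average slant of -0.75 come first
```

Because each session’s top-ranked stories then feed back into the reading history, the user’s profile is pulled further toward the same viewpoint, which is the reinforcement loop described above.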

Independent studies designed to determine whether this filter bubble actually exists, based on analyses of data from Facebook users, were recently published in Science and Nature. They found that “… the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political ‘filter bubbles’ that reinforced people’s worldviews.” On the other hand, exposure to this echo chamber did not seem to increase polarization among users.

As we have pointed out in the past, the data sets being used by many healthcare organizations are also biased and likely suffer from the same reinforcement problem. This has resulted in a long list of disparities that have impacted patient care. In 2022, BMJ Health & Care Informatics published a review in which we outlined several ways in which algorithms can be biased, thereby reinforcing prejudices against certain segments of the population, including people of color, women, and persons in lower socioeconomic groups.

One of the most glaring examples of such prejudice was documented by Ziad Obermeyer and his colleagues at the School of Public Health, University of California, Berkeley and elsewhere. They identified over 43,000 white and more than 6,000 Black patients who were part of risk-based contracts that determined their eligibility for insured medical care. They found that at each risk score, Black patients were sicker than their white counterparts, based on their signs and symptoms. However, the commercial data set used to determine their eligibility to receive care did not recognize the greater disease burden in Black patients because it assigned risk scores based on total healthcare costs accrued in a year.

It doesn’t take a data scientist with an in-depth knowledge of algorithms to recognize the problem with this reasoning. As we pointed out in our BMJ HCI review: “Using this metric as a proxy for their medical need was flawed because the lower cost among Blacks may have been due to less access to care, which in turn resulted from their distrust of the healthcare system and direct racial discrimination from providers.”
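A small sketch makes the flaw easy to see. The patient records below are hypothetical numbers of our own invention, not the study’s data; the point is only that ranking by annual cost can push a sicker but lower-spending patient to the bottom of the priority list, while ranking by a direct measure of disease burden does not.

```python
# Illustrative sketch (hypothetical data) of the cost-as-proxy flaw:
# a patient who is sicker but accrues lower costs -- e.g., because of
# reduced access to care -- is under-prioritized by a cost-based score.

patients = [
    # (id, chronic_conditions, annual_cost_usd)
    ("P1", 2, 9000),   # moderate illness, full access to care
    ("P2", 5, 4000),   # sickest patient, but lower cost due to access barriers
    ("P3", 1, 7000),
]

def rank_by_cost(pop):
    """Cost-as-proxy risk score: highest spenders flagged as highest need."""
    return sorted(pop, key=lambda p: p[2], reverse=True)

def rank_by_burden(pop):
    """Ranking on a direct measure of disease burden instead."""
    return sorted(pop, key=lambda p: p[1], reverse=True)

print([p[0] for p in rank_by_cost(patients)])    # ['P1', 'P3', 'P2'] -- sickest patient ranked last
print([p[0] for p in rank_by_burden(patients)])  # ['P2', 'P1', 'P3'] -- sickest patient ranked first
```

The two rankings disagree on exactly the patient whose low spending reflects barriers to care rather than good health, which is the pattern Obermeyer and colleagues found at population scale.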

Unfortunately, this type of reinforcement of social stereotypes persists in healthcare. On a more positive note, however, a long list of stakeholders is doing its part to chip away at these biases. The Coalition for Health AI, which was created to advocate for equitable, representative AI, counts many well-respected healthcare organizations among its members, including:

  • AdventHealth
  • Boston Children’s Hospital
  • Duke Health
  • Johns Hopkins Medicine
  • Kaiser Permanente
  • Mass General Brigham
  • Mayo Clinic
  • MedStar Health
  • Mount Sinai Health System
  • National Health Council (NHC)
  • Providence
  • Penn Medicine
  • University of California Health System (including UC Davis Health, UCI Health, UCLA Health, UC Riverside Health, UC San Diego Health and the Jacobs Center for Health Innovation, and UCSF Health)
  • University of North Carolina Health
  • Sharp HealthCare
  • Stanford Medicine
  • Vanderbilt University Medical Center
  • Yale New Haven Health

These healthcare providers, in conjunction with several high-profile technology companies and federal agencies, can turn the tide and create a more balanced, fair ecosystem.

