AI Displays Racial Bias Evaluating Mental Health Cases

WEDNESDAY, July 9, 2025 (HealthDay News) — AI programs can exhibit racial bias when evaluating patients for mental health problems, a new study says.

Psychiatric recommendations from four large language models (LLMs) changed when a patient’s record noted they were African American, researchers recently reported in the journal NPJ Digital Medicine.

“Most of the LLMs exhibited some form of bias when dealing with African American patients, at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient,” said senior researcher Elias Aboujaoude, director of the Program in Internet, Health and Society at Cedars-Sinai in Los Angeles.

“This bias was most evident in cases of schizophrenia and anxiety,” Aboujaoude added in a news release.

LLMs are trained on enormous amounts of data, which enables them to understand and generate human language, researchers said in background notes.

These AI programs are being tested for their potential to quickly evaluate patients and recommend diagnoses and treatments, researchers said.

For this study, researchers ran 10 hypothetical cases through four popular LLMs: ChatGPT-4o, Google’s Gemini 1.5 Pro, Claude 3.5 Sonnet, and NewMes-v15, a freely available version of a Meta LLM.

For each case, the AI programs received three different versions of the patient record: one that omitted any reference to race, one that explicitly noted the patient was African American, and one that implied the patient’s race through their name.

The AI programs often proposed different treatments when a record stated or implied that the patient was African American, the results show:

  • Two programs omitted medication recommendations for ADHD when race was explicitly stated.

  • Another AI suggested guardianship for Black patients with depression.

  • One LLM showed increased focus on reducing alcohol use when evaluating African Americans with anxiety.

Aboujaoude theorizes that the AI programs displayed racial bias because they picked it up from the content used to train them, essentially perpetuating inequalities that already exist in mental health care.

“The findings of this important study serve as a call to action for stakeholders across the healthcare ecosystem to ensure that LLM technologies enhance health equity rather than reproduce or worsen existing inequities,” David Underhill, chair of biomedical sciences at Cedars-Sinai, said in a news release.

“Until that goal is reached, such systems should be deployed with caution and consideration for how even subtle racial characteristics may affect their judgment,” added Underhill, who was not involved in the research.

More information

The Cleveland Clinic has more on AI in health care.

SOURCE: Cedars-Sinai, news release, June 30, 2025

July 9, 2025
Copyright © 2025 HealthDay. All rights reserved.

