Yale study finds racial bias in ChatGPT's radiology reports

Rheumatoid arthritis seen on an x-ray of the hand. (BSIP / Getty)

A recent Yale study in the journal Clinical Imaging raises concerns about racial bias when OpenAI’s GPT-3.5 and GPT-4 are provided with patients’ race.

Some racial groups are at higher risk than others for certain diseases, and that’s information that could be useful to artificial intelligence in health care. But sharing racial information with AI can have effects beyond delivering medical information, according to Dr. Melissa Davis, co-author of the study, vice chair of Medical Informatics at Yale, and associate professor in the Department of Radiology and Biomedical Imaging.

Researchers asked OpenAI’s ChatGPT to simplify more than 700 Yale radiology reports. If a report said a patient has “Kerley B lines,” for example, ChatGPT would essentially translate that into something most people could understand: there’s extra fluid in the lungs that shouldn’t be there.

But when ChatGPT was also told the race of the person asking for the simplification, researchers found it adjusted the response to lower reading levels depending on which race was shared.

“We found that white and Asian patients typically had a higher reading grade level,” Davis said. “If I said 'I am a Black patient,' or 'I am an American Indian patient' or 'Alaska Native patient,' the reading grade levels would actually drop.”
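
The article doesn’t name the readability metric the Yale team used, but the comparison they describe, scoring how hard each simplified report is to read, can be illustrated with the standard Flesch-Kincaid grade-level formula. Below is a minimal Python sketch; the prompt template and the two sample simplifications are hypothetical stand-ins, not taken from the study.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate via vowel groups; validated tools do better."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / max(len(sentences), 1)
            + 11.8 * syllables / max(len(words), 1) - 15.59)

# Hypothetical prompt template; the article does not publish the study's actual wording.
PROMPT = "I am a {race} patient. Please simplify this radiology report for me: {report}"

# Two illustrative simplifications of the same finding ("Kerley B lines"),
# written at different reading levels, to show what the metric detects.
version_a = ("The report shows Kerley B lines, which indicate interstitial "
             "pulmonary edema, meaning excess fluid has accumulated in the lungs.")
version_b = "The scan shows extra fluid in your lungs that should not be there."

for label, text in [("A", version_a), ("B", version_b)]:
    print(f"Version {label}: estimated grade level {flesch_kincaid_grade(text):.1f}")
```

In the study’s terms, a systematically lower score for responses generated for one racial group is the kind of disparity the researchers flagged.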

She said that even though ChatGPT did an excellent job of simplifying the reports, the findings signal that patients’ race, as a social determinant of health, should not be disclosed to large language models. Instead, the way medical information is tailored to patients could be based on something else, like education level or age.

Utopian, dystopian, or a mix of both, AI is expanding in health care across Connecticut and nationally.

In February, Hartford HealthCare launched the Center for AI Innovation in Healthcare, a collaboration with the Massachusetts Institute of Technology and Oxford University.

“AI stands poised to profoundly reshape health care delivery, impacting access, affordability, equity and excellence,” said Dr. Barry Stein, the group’s chief clinical innovation officer.

Researchers at UConn Health are using AI to help diagnose lung cancer in the early stages.

“It’s all about stage shift,” said Dr. Omar Ibrahim, associate professor of medicine and director of interventional pulmonology at UConn Health. “Using the AI technology of Virtual Nodule Clinic, I can find the patients who need care, before they even realize it themselves, and get them treated at the earliest possible stage.”

Sujata Srinivasan is Connecticut Public Radio’s senior health reporter. Prior to that, she was a senior producer for Where We Live, a newsroom editor, and from 2010-2014, a business reporter for the station.

