LLMs found using stigmatizing language about individuals with alcohol and substance use disorders

As artificial intelligence rapidly develops and becomes a growing presence in health care communication, a new study addresses the concern that large language models (LLMs) can reinforce harmful stereotypes by using stigmatizing language. The study, from researchers at Mass General Brigham, found that more than 35% of LLM responses to questions about alcohol- and substance use-related conditions contained stigmatizing language. The researchers also found, however, that targeted prompts can substantially reduce stigmatizing language in the LLMs' answers. The results are published in the Journal of Addiction Medicine.
