New study reveals high rates of fabricated and inaccurate citations in LLM-generated mental health research

A new study published in the journal JMIR Mental Health (JMIR Publications) highlights a critical risk in researchers' growing use of large language models (LLMs) such as GPT-4o: the frequent fabrication and inaccuracy of bibliographic citations. The findings underscore the urgent need for rigorous human verification and institutional safeguards to protect research integrity, particularly in specialized, less widely covered areas of mental health.
