Decoding HIV Stigma: NLP Models Take the Lead
New NLP tools aim to identify HIV-related stigma in clinical notes. Researchers at UF Health tested various models, with GatorTron-large showing promise.
Understanding how stigma affects people living with HIV isn't just academic; it's vital. The stigma surrounding HIV can severely impact mental health and treatment outcomes. Yet the ability to pinpoint and categorize these experiences from clinical notes remains elusive. Enter the latest study from the University of Florida Health, where a team has developed a novel NLP tool to identify HIV stigma documented in clinical narratives.
HIV Stigma Unveiled
Researchers compiled clinical notes from patients receiving care at UF Health between 2012 and 2022. Using a mix of expert-curated keywords and clinical word embeddings, they identified candidate sentences reflecting stigma. The results? A staggering 1,332 sentences, each manually annotated across four distinct stigma subscales: Concern with Public Attitudes, Disclosure Concerns, Negative Self-Image, and Personalized Stigma.
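The screening step described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual pipeline: the keyword list, the toy word vectors, and the similarity threshold below are all assumptions standing in for the expert-curated lexicon and clinical embeddings the researchers used.

```python
import math

# Illustrative stand-in for the expert-curated stigma lexicon (assumed).
STIGMA_KEYWORDS = {"ashamed", "judged", "secret", "blame"}

# Toy 2-d word vectors for demonstration only; the study used
# clinical word embeddings trained on real notes.
TOY_VECS = {
    "ashamed": [1.0, 0.0],
    "embarrassed": [0.9, 0.1],
    "happy": [0.0, 1.0],
}

def embed(token):
    """Look up a toy vector; unknown tokens map to the zero vector."""
    return TOY_VECS.get(token, [0.0, 0.0])

def cosine(u, v):
    """Cosine similarity, returning 0.0 for zero-norm vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_candidate(sentence, keyword_vecs, threshold=0.8):
    """Flag a sentence if it contains a keyword, or any token whose
    embedding sits close to a keyword vector (near-synonym expansion)."""
    tokens = sentence.lower().split()
    if STIGMA_KEYWORDS & set(tokens):
        return True
    return any(
        cosine(embed(tok), kv) >= threshold
        for tok in tokens
        for kv in keyword_vecs
    )
```

The embedding expansion is what lets the screen catch paraphrases ("embarrassed") that an exact keyword match would miss; flagged sentences would then go to human annotators.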
In a world obsessed with AI models, it's refreshing to witness a real-world application of these technologies. But which models shone brightest? GatorTron-large, an encoder-based model, led the pack with a Micro F1 score of 0.62. Meanwhile, generative models like GPT-OSS-20B and LLaMA-8B weren't far behind, especially with few-shot prompting, achieving scores of 0.57 and 0.59, respectively.
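For readers unfamiliar with the metric behind those numbers: Micro F1 pools true positives, false positives, and false negatives across all labels before computing precision and recall, which matters here because each sentence can carry multiple stigma subscales. A minimal sketch, with illustrative label sets:

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 for multi-label predictions.

    gold, pred: parallel lists of label sets, one pair per sentence.
    Counts are pooled over all sentences and labels before the
    precision/recall computation (unlike macro-F1, which averages
    per-label scores).
    """
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because frequent subscales dominate the pooled counts, a model can post a respectable Micro F1 while still struggling on a rare category; that is one reason per-subscale variability (discussed below) is worth watching.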
Generative Models: A Mixed Bag
While few-shot prompting improved performance, zero-shot generative inference wasn't without its flaws, showing failure rates of up to 32%. That's not just a number; it's a gap in reliability that could have significant implications for clinical settings. If you're betting the farm on generative models, you'd better have a solid fallback.
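What would such a fallback look like in practice? One common guard is to validate each generation against the closed set of subscale names and route anything unparseable elsewhere. A hedged sketch, assuming a comma-separated output convention (the study's actual output format and fallback strategy are not specified here):

```python
# The four subscale names come from the study; the parsing convention
# and abstention behavior below are assumptions for illustration.
SUBSCALES = {
    "Concern with Public Attitudes",
    "Disclosure Concerns",
    "Negative Self-Image",
    "Personalized Stigma",
}

def parse_labels(raw):
    """Return (labels, ok) for one generative model output.

    ok=False marks a failed generation: the caller should route the
    sentence to a fallback (e.g., an encoder model or human review)
    rather than silently accept garbage labels.
    """
    parts = {p.strip() for p in raw.split(",") if p.strip()}
    if parts <= SUBSCALES:
        return parts, True  # every label is a known subscale
    return set(), False     # free-text or hallucinated labels: abstain
```

Counting `ok=False` outcomes over a test set is also exactly how a failure rate like the 32% figure would be measured.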
What's particularly intriguing here is the variability in model performance across the stigma subscales. Negative Self-Image was the most easily recognized subscale, while Personalized Stigma posed the greatest challenge. This variability raises an important question: Can we ever trust an AI to fully capture the nuances of human stigma?
The Road Ahead
This research marks the first practical step towards an NLP tool for identifying HIV stigma in clinical settings. Yet, it's clear that we're still wrestling with challenges of accuracy and reliability. The intersection of AI and healthcare is real. Ninety percent of the projects aren't. But for the ones that are, the stakes couldn't be higher.
As we forge ahead, one thing's clear: slapping a model on a GPU rental isn't a convergence thesis. Show me the inference costs. Then we'll talk about scalability and real-world applications. Until then, the debate rages on: do AI models offer a glimpse of a stigma-free future, or are they just another piece of vaporware in a crowded AI marketplace?