
Assessing GenAI's Intra-Agent Reliability in Coding Unstructured Human Factors Data within Patient Safety Reports: An Empirical Investigation
Description
The rise of large language models (LLMs) and Generative Pre-trained Transformers (GPTs) has sparked interest in leveraging specialized generative AI (GenAI) agents to analyze and code human factors (HF) data in patient safety reports. This has raised questions about these agents' ability to detect HF issues within unstructured narrative data and then reliably code them using established HF frameworks. To explore this potential, our lab is conducting systematic experiments with various specialized GPT agent configurations, focusing on enhancing their ability to analyze patient harm reports and identify HF issues. Our preliminary results are encouraging, demonstrating GenAI's transformative potential for processing unstructured data. These findings suggest a future where AI not only supports but significantly advances analytical methods. However, we proceed cautiously, mindful of the challenges and limitations at this early adoption stage, including issues with GenAI's output reliability and interpretability.
Event Type
Lecture
Time
Tuesday, September 10th, 3:20pm - 3:40pm MST
Location
Grand Ballroom
Tracks
Health Care