
Presentation

Large Language Models for Proactive Learning in Healthcare: A Comparative Study of Traditional Incident Reporting and Resilience Engineering Approaches
Description
This research explores the application of generative Large Language Models (LLMs) to learn at scale from self-reports in healthcare systems. Traditional Incident Reporting Systems (IRS) often suffer from underreporting and are generally ineffective at improving patient safety due to the overwhelming volume of data, a lack of actionable insights, and the discouragement of proactive learning. In contrast, Resilience Engineering (RE) emphasizes continuous learning from routine operations and proactive adaptation to challenges. The Resilience Engineering Tools to Improve Patient Safety (RETIPS) address these IRS limitations by capturing narratives of adaptation from frontline workers during crises such as the COVID-19 pandemic. The study assesses the performance of IRS and RETIPS by using LLMs to analyze incident reports and worker narratives. By integrating LLMs with both IRS and RETIPS, we aim to improve safety and quality of care through proactive learning and the recognition of performance challenges, thereby enhancing the adaptability and resilience of healthcare systems.
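As a rough illustration of the kind of LLM-based analysis the abstract describes, the sketch below prompts a general-purpose chat model to extract a performance challenge, an adaptation, and a residual risk from a single free-text narrative. The client library, model name, prompt wording, and example narrative are all assumptions made for illustration; the study's actual pipeline is not described in this abstract.

```python
# Hypothetical sketch: using a generative LLM to extract adaptation themes
# from a RETIPS-style frontline narrative. Model choice and prompt are
# illustrative assumptions, not the study's method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

narrative = (
    "During the surge we ran out of standard IV pumps, so the night shift "
    "repurposed pumps from the outpatient clinic and wrote a quick dosing "
    "checklist to keep infusions consistent across units."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You analyze healthcare worker narratives. Identify the "
                "performance challenge, the adaptation made by frontline "
                "staff, and any residual safety risk. Answer as three "
                "short labeled bullet points."
            ),
        },
        {"role": "user", "content": narrative},
    ],
)

print(response.choices[0].message.content)
```

In a scaled-up setting, the same prompt could be applied across a batch of IRS reports and RETIPS narratives and the extracted themes aggregated for comparison, which is the spirit of the comparative analysis the abstract outlines.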
Event Type
Lecture
Time
Wednesday, September 11th, 3pm - 3:20pm MST
Location
FLW Salon C
Tracks
Cognitive Engineering & Decision Making
Topics
DEI