RetCare: Towards Interpretable Clinical Decision Making through LLM-Driven Medical Knowledge Retrieval

Abstract

The integration of Electronic Health Record (EHR) data has greatly advanced clinical decision-making by providing vast amounts of patient information. However, despite significant progress in machine learning models for predicting patient outcomes, these models are rarely used in clinical practice due to their limited interpretability. To address this, we propose RetCare, a workflow that enhances model interpretability by incorporating authoritative medical literature. RetCare leverages a retrieval-augmented generation (RAG) pipeline over more than two million PubMed entries, combined with the zero-shot reasoning capabilities of large language models (LLMs). Our approach focuses on validating machine learning outputs against references from authoritative sources to build clinician trust, developing comprehensive prompting strategies that integrate model outputs with healthcare context, and providing detailed, interpretable reasoning to support clinical decisions. Experimental results on two real-world datasets demonstrate that RetCare significantly improves the accuracy and reliability of model predictions, facilitating more informed and trustworthy clinical decision-making. The code is publicly released at https://github.com/PKU-AICare/RetCare.
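The workflow described above — retrieve supporting literature for a model prediction, then prompt an LLM to validate the prediction against that evidence — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy corpus, the bag-of-words scorer (standing in for a dense retriever over PubMed), and the prompt wording are all assumptions.

```python
from collections import Counter
import math

def tokenize(text):
    return [w.lower().strip(".,") for w in text.split()]

def score(query, doc):
    # Bag-of-words cosine similarity: a simple stand-in for a learned
    # dense retriever over PubMed embeddings.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    # Rank literature entries by relevance to the query; keep the top-k.
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(patient_summary, model_output, evidence):
    # Combine the ML prediction with retrieved references so the LLM can
    # produce grounded, interpretable reasoning for clinicians.
    refs = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(evidence))
    return (
        f"Patient summary: {patient_summary}\n"
        f"Model prediction: {model_output}\n"
        f"Literature:\n{refs}\n"
        "Task: assess whether the prediction is consistent with the cited "
        "literature and explain your reasoning with references."
    )

# Toy stand-in for the PubMed corpus (hypothetical entries).
corpus = [
    "Elevated creatinine is associated with higher mortality in dialysis patients.",
    "Statin therapy reduces cardiovascular events in high-risk adults.",
    "Low albumin predicts poor outcomes in end-stage renal disease.",
]
evidence = retrieve("elevated creatinine mortality risk", corpus)
prompt = build_prompt("67-year-old on dialysis, creatinine 8.2 mg/dL",
                      "high mortality risk (0.82)", evidence)
```

The resulting prompt would then be sent to an LLM, whose cited, step-by-step answer is what RetCare surfaces to the clinician alongside the raw model score.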

Publication
KDD 2024 Workshop - Artificial Intelligence and Data Science for Healthcare: Bridging Data-Centric AI and People-Centric Healthcare