Harnessing AI to ‘Make it Easier for Radiologists to Practice Better’
2019 Imaging Innovation Award Winner: recoMD
By Radiology Partners
In 2015, our practice began a quality-improvement project aimed at decreasing variability in our radiology reports. We used a change management process to help our radiologists provide evidence-based Best Practice Recommendations (BPRs) for several incidental pathologies. Because no tools were available to help at the time, our radiologists had to remember to use our BPRs whenever they came across a relevant incidental lesion. If they remembered, they then had to find and apply the correct logic to identify the appropriate follow-up recommendation and enter that recommendation into their report. To provide feedback to our rads, analysts in our practice evaluated hundreds of relevant radiologist reports for adherence to our BPRs and provided those scores to our local practices. This feedback process is an essential component of change management, and it worked well for some time. However, as we scaled our practice size and the number of BPRs (each meaning more reports to review), as well as the complexity of our BPRs (e.g., incidental adrenal lesions), we realized both our rads and our analysts would soon be overwhelmed, and change management on its own would not be effective. We needed a tool to assist both groups in order to scale.
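To make the manual process above concrete, here is a minimal sketch of what one such decision rule can look like. It is loosely modeled on published ACR guidance for incidental thyroid nodules; the thresholds, names and wording are an illustrative simplification, not the full logic of any deployed BPR.

```python
# Illustrative only: a simplified incidental thyroid nodule rule, loosely
# based on published ACR guidance. Thresholds and wording are examples,
# not the full deployed BPR logic.

def thyroid_nodule_bpr(nodule_size_cm: float, patient_age: int,
                       suspicious_features: bool = False) -> str:
    """Return a follow-up recommendation for an incidental thyroid nodule."""
    if suspicious_features:
        return "Recommend dedicated thyroid ultrasound."
    # In the published guidance, the size threshold depends on patient age.
    threshold_cm = 1.0 if patient_age < 35 else 1.5
    if nodule_size_cm >= threshold_cm:
        return "Recommend dedicated thyroid ultrasound."
    return "No follow-up imaging recommended."

print(thyroid_nodule_bpr(nodule_size_cm=1.2, patient_age=60))
# -> "No follow-up imaging recommended." (1.2 cm is below the 1.5 cm threshold)
```

Remembering and correctly applying dozens of rules like this one, each with its own branch points, is exactly the burden that made the manual process hard to scale.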
Aims and objectives. The main aim of the project was to improve the consistency and accuracy of follow-up recommendations—specifically, to ensure reported recommendations are population health-driven and evidence-based. If we were able to do this, we knew we would not only improve patient care (all BPRs) but also save lives (abdominal aortic aneurysm BPR), decrease costs (incidental thyroid nodule BPR), improve referring physician satisfaction (all BPRs), and improve radiologist workflow and efficiency (all BPRs). To accomplish this lofty goal, we set out to create an artificial intelligence (AI) tool based on natural language processing (NLP) that would (a) provide the appropriate follow-up recommendation at the point of dictation and within the radiologists’ workflow, using the dictated pathology and BPR logic, and (b) assist our analysts in evaluating radiologist BPR compliance.
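As a rough illustration of how parts (a) and (b) fit together, the sketch below pairs a crude keyword-based extraction step with the rule logic and a compliance check. The function names and the regular expression are invented for illustration; a production system would rely on a trained NLP model, not keyword matching.

```python
import re
from typing import Optional

# Hypothetical sketch of components (a) and (b). The regex is a crude
# stand-in for the NLP model that extracts the dictated pathology.

def extract_nodule_size_cm(report_text: str) -> Optional[float]:
    """Find an incidental thyroid nodule size in the dictated text, if any."""
    m = re.search(r"thyroid nodule[^.]*?(\d+(?:\.\d+)?)\s*cm", report_text, re.I)
    return float(m.group(1)) if m else None

def recommend(report_text: str, patient_age: int) -> Optional[str]:
    """(a) Point of dictation: map the extracted finding to a recommendation."""
    size = extract_nodule_size_cm(report_text)
    if size is None:
        return None  # no relevant incidental finding detected
    threshold_cm = 1.0 if patient_age < 35 else 1.5  # same rule sketched above
    if size >= threshold_cm:
        return "Recommend dedicated thyroid ultrasound."
    return None

def is_compliant(report_text: str, patient_age: int) -> bool:
    """(b) Analyst assist: flag reports missing the expected recommendation."""
    expected = recommend(report_text, patient_age)
    return expected is None or expected.rstrip(".").lower() in report_text.lower()
```

Running the same rule logic at the point of dictation and again during compliance review is what lets one tool serve both the radiologists and the analysts.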
If both components succeeded, we knew we could scale our BPR program from a few recommendations to many and include BPRs that were previously too complicated to deploy with change management alone. We also would be able to continue growing the size of our practice and roll out our BPR program to new practices without overwhelming our radiologists or the internal resources evaluating radiologist BPR compliance.
Leadership and project management. We organized a multidisciplinary team with representation from IT infrastructure, IT applications, data scientists, radiologists and analysts from our clinical value team (CVT). A radiologist who oversees our clinical data science and analytics team provided strategic guidance, and a project manager kept the group on track. Interestingly, we learned that communication between IT, data science (DS) and radiologists is not automatic. It takes work to ensure group members deliver the message in a way that other members can understand. As an example, when we first began working with the DS team, the initial product was not functioning as desired. With that realization, we revisited how we communicated with the DS team.
Instead of remotely providing information about the BPRs, we flew onsite and spent two to three days with the DS team discussing the anatomy, physiology and terminology of the relevant structures before delving into the BPR logic. We spent the final day reviewing radiology reports and clarifying how the logic would be applied. This communication change transformed our interaction and enabled the DS team to produce a highly accurate product. We call our AI product “recoMD.” We promoted it by creating a video advertisement, discussing it at our all-practice meetings, advertising it through blog posts and demonstrating it with multiple local practices. We also were intentional about training each radiologist to ensure a successful rollout. Our attention to process has been effective: Radiologists and practices are now requesting recoMD.
Key steps. Deploying recoMD with our radiologists revolved around radiologist education and communication. Specifically, we provided each radiologist with information about why the tool was created, its capabilities, how it fits into the normal radiologist workflow and how the radiologists and the care they provide could benefit from using it. We looked at the education from the radiologist’s perspective—what was important to them—and created our training materials and training process with that in mind. We begin every rollout with an onsite group presentation given by our clinical lead (a radiologist) to the local radiologists. This one-hour session focuses on why recoMD was created and how it helps the radiologists. The clinical lead also reviews recoMD’s functionality, describes how to use it and discusses potential future developments. The session ends with a live demo. Over the subsequent week(s), the radiologists are individually trained on the tool. Each spends anywhere from 15 to 30 minutes with a clinical trainer and an IT staff member, who show them how to use recoMD on their workstation and let them practice on live cases. Contacts are shared for follow-up questions. We provide additional hardcopy FAQs and cheat sheets, and we make digital information about the tool available to all radiologists on our internal practice-wide website. Interaction between the radiologist and our training team continues through the many feedback mechanisms available within the tool. The CVT evaluates and responds to every piece of feedback sent by a radiologist.
Positive outcomes. After the implementation of our recoMD tool, radiologist adherence to BPRs and billing conditions increased significantly. In fact, the first practice for which recoMD was implemented improved its BPR adherence by up to 83% relative to baseline. Specifically, performance in this practice increased from 75% to 100% on ovarian cysts, from 61% to 92% on abdominal aortic aneurysms and from 54% to 99% on incidental thyroid nodules. Practices also improved their adherence to required billing conditions (including MIPS measures) by up to 86%, with 58% improvement on MIPS measure 436, 61% improvement on MIPS measure 145, 81% improvement on properly documenting CTA studies and 86% improvement in providing a billable history. In addition, both user logs and radiologist feedback have validated that our radiologists are consistently using the tool. Although the tool has been well accepted since implementation, this project is far from complete. Our goal is to continually add functionality and make improvements. Aside from the feedback we receive from radiologists through the tool, we have used surveys to gather additional input. This information has directed improvements to both the data science logic and the user interface. New BPRs also are being consistently added.
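For readers parsing the figures, the improvements are relative gains over baseline adherence rather than percentage-point increases; the quick calculation below, using the adherence numbers quoted above, shows where the “up to 83%” figure comes from.

```python
# Reading the outcome figures: relative gain = (after - before) / before,
# using the before/after adherence percentages quoted above.
adherence = {
    "ovarian cysts": (75, 100),
    "abdominal aortic aneurysms": (61, 92),
    "incidental thyroid nodules": (54, 99),
}
for bpr, (before, after) in adherence.items():
    gain = (after - before) / before * 100
    print(f"{bpr}: {before}% -> {after}% ({gain:.0f}% relative gain)")
# incidental thyroid nodules: 54% -> 99% (83% relative gain) -- the
# source of the "up to 83%" figure.
```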
Innovative elements. We were successful in achieving the main goal by ensuring recoMD was thoroughly integrated into the radiologist’s workflow. No tool—no matter how good it is—will be used if it is not accepted by radiologists. Our clinical value team’s mission, “make it easier for radiologists to practice better,” drove our success. The development team applied significant thought and effort, guided by feedback from our users, to ensure we could innovate to accomplish this aim. The other important driver of the team’s innovation was the close collaboration between the subject matter experts—the radiologists—and the data science team. This collaboration improved the statistical accuracy of the tool to well above that of typical machine learning NLP algorithms.
AI products are supposed to benefit users by “thinking” like humans. Our product is built to take over some of the reasoning and memorization required for radiologists to apply all the billing rules and appropriate best practice follow-up recommendations in each clinical situation so they can focus more fully on the patient. Because no machine learning algorithm is perfect, our DS team has invested significant time learning anatomy and terminology from radiology reports to deepen our knowledge of how radiologists dictate so that our product can meet high accuracy standards. Our success metrics also are driven by subjective measures of radiologist perception and feedback, such as how much of an imposition certain alerts are and which features do the most to streamline their workflow.
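One illustrative flavor of that dictation-pattern work (a hypothetical sketch, not a description of recoMD’s actual NLP) is normalizing the many ways a finding can be dictated before any rule is applied:

```python
# Hypothetical illustration of dictation-variant normalization; recoMD's
# actual NLP approach is not described in this article.
SYNONYMS = {
    "aaa": "abdominal aortic aneurysm",
    "abd aortic aneurysm": "abdominal aortic aneurysm",
    "infrarenal aortic aneurysm": "abdominal aortic aneurysm",
}

def normalize_finding(phrase: str) -> str:
    """Map a dictated variant onto the canonical finding name used by rules."""
    key = " ".join(phrase.lower().split())
    return SYNONYMS.get(key, key)

assert normalize_finding("AAA") == "abdominal aortic aneurysm"
```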
Submitted by Nina Kottler, MD, VP of clinical operations for Radiology Partners in El Segundo, Calif.