Mount Sinai Medical Center: Implementation of Decision Support for Radiology Orders
The rapid deployment of EMRs—in parallel with the extensive penetration of digital systems into radiology—has produced an environment that will enable many new technologies, including clinical decision support, to be introduced into the health-care system. There has long been interest in clinical decision support, especially at the time of order entry, but a lack of systems (and credentialed guidelines) has limited clinical use. In the realm of imaging, it is hoped that the implementation of radiology decision support will have an impact on inappropriate utilization and will thereby increase the quality (and diminish the cost) of imaging within the overall health-care system. Such solutions are now moving into commercially available systems. While we might expect growing pains, we simultaneously should look for enhancements in the quality of care that we can deliver to our patients. New opportunities have arrived; we need to take our new tools and put them to use.

At MSMC, our first efforts to implement a radiology decision-support system made us early adopters of commercial offerings. MSMC encompasses both the Mount Sinai Hospital and the Mount Sinai School of Medicine. The Mount Sinai Hospital, founded in 1852, is a 1,171-bed tertiary- and quaternary-care teaching facility and one of the nation’s oldest, largest, and most respected voluntary hospitals. Nearly 60,000 people were treated at MSMC as inpatients in 2011, and approximately a million outpatient visits took place. Because of MSMC’s location on the Upper East Side of New York City, the hospital sits at the intersection of some of the wealthiest and poorest zip codes in the United States; it has the responsibility of meeting the medical needs both of patients from affluent backgrounds and of those requiring indigent care. MSMC’s focus is on seamless care coordination across all ambulatory, inpatient, and emergency-department settings—as well as on giving clinicians access to their patients’ records.
This is supported by a well-known, broadly deployed integrated EMR at MSMC. MSMC is also implementing a wide array of advanced clinical processes that the EMR makes possible; the primary drivers are quality, safety, and efficiency. We spent approximately two years exploring potential decision-support solutions for radiology. In parallel, the ACR® decided to make its well-known ACR Appropriateness Criteria® available (in a format that downstream systems can easily consume) via ACR Select—the Web-service version of the appropriateness criteria and the exclusive distribution channel for the guidelines.

We were positioned to purchase a solution concurrent with the release of the ACR Select product; at the same time, our EMR vendor adopted the approach of establishing a transparent means of importing and integrating credentialed rule sets for radiology clinical decision support. These two converging strategies shaped our current approach. The ACR provides an authoritative source of clinical decision-support rules, meant to be consumed as a Web service by multiple EMRs, so the solution is vendor agnostic. In fact, this approach makes initial implementation fairly straightforward, requiring little technical effort (on the order of 40 hours of work).

This implementation represents a first version and an early effort by the EMR vendor to integrate the ACR Select rule set, and as such, it had many constraints. The ACR Select product provides the vendor with lists of indications, and the EMR then orchestrates the presentation of those indications. In this first version, there was little flexibility regarding the structure and presentation of the indication lists: the lists were provided as groups, and one could combine these groups to build a list presented to the ordering provider at the time of order entry.
In essence, a site had a choice between long lists—inclusive of every possible indication for an exam—and short lists of common indications (excluding rarer ones). As an academic institution with many specialty providers, we initially elected to provide the long, granular lists of indications. A small group of radiologists and clinicians reviewed the lists and decided which to include and which to exclude; our local EMR team then did the work of integrating these indications with our EMR.

The Data-collection Phase

Phase 1 of radiology decision support was implemented in March 2013. This was intended as a baseline data-collection phase. When an exam indication is selected, it is sent to the ACR Select servers, which return a decision-support score to the EMR via a Web-services interface. Scores range from 1 to 9: a score of 7 to 9 indicates that the order is deemed highly appropriate; scores of 1 to 3 suggest that the indication for the exam does not meet the standard recommendations of the ACR Appropriateness Criteria; and scores of 4 to 6 suggest an intermediate level of compliance with the guidelines. A low score does not necessarily mean that it is wrong to order the exam, but the indication should undergo further scrutiny to determine why the specific exam would be justified. All agree that there will be fully justified outliers.

This phase is intended for data collection, validation, and initial analysis. Active decision support was not included; that is, the scores were not presented to the ordering provider, but were kept in a database in the background. An analysis of these data will help guide phase 2 of the project, in which active decision support will be rolled out to ambulatory and inpatient settings. We will use these data to improve our understanding of ordering patterns at our institution, and they will serve as quality-assessment and education tools.
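The 1-to-9 banding can be summarized in a small lookup. The sketch below is our own illustration of the score ranges described above; the function name and wording are not part of ACR Select or the EMR:

```python
def appropriateness_band(score: int) -> str:
    """Map an ACR Select decision-support score (1-9) to its band.

    7-9: highly appropriate; 4-6: intermediate compliance with the
    guidelines (warrants further scrutiny); 1-3: does not meet the
    standard recommendations of the ACR Appropriateness Criteria.
    """
    if not 1 <= score <= 9:
        raise ValueError("ACR Select scores range from 1 to 9")
    if score >= 7:
        return "highly appropriate"
    if score >= 4:
        return "intermediate"
    return "does not meet recommendations"
```

As noted above, a low band does not make the order wrong; it flags the indication for further scrutiny.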
We are looking for providers who consistently obtain low scores (1 to 3) for the exams that they order. Senior clinical and radiology staff will sit privately with these providers, review their personalized data, and try to understand the anomalous ordering pattern. If a provider cannot justify the deviation from standard practice, education will be provided. When this step is completed, we intend to turn on the feedback mechanism: at that time, scores below 7 will be shown to ordering providers in a pop-up screen, and they will be given the opportunity to change their orders and/or provide additional information. Recommendations for other imaging exams that better match the provided indication are listed on this screen. It is important to note that we will not prevent the provider from moving forward with the original order. Our staff will be informed that this is an ongoing quality-assurance activity and that the data will continually undergo review.

Initial Response

Radiology decision support is currently in use in our emergency-department and inpatient settings. Shortly before going live, we made several broadcast announcements to reach the various providers who order imaging exams, including physicians, nurse practitioners, and physician assistants. In addition, within the EMR, we provided instructions regarding the new appearance of the indication screens and how to respond to them. ACR Select currently covers CT and MRI. Our local analytics group developed reports that we run weekly; they enable us to assess utilization and radiology decision-support scores by ordering provider, attending physician, and location. Compliance with selecting a discrete indication of use for CT and MRI orders has been approximately 65% to 70%, with particular challenges seen in the emergency-department setting. Unless a discrete indication is chosen, no decision-support score is returned, which prevents meaningful data gathering (and informed reporting).
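The weekly reporting amounts to a simple aggregation over scored orders; unscored orders must be excluded, since no score is returned when no discrete indication is chosen. This sketch uses illustrative field names of our own, not the actual report schema:

```python
from collections import defaultdict

def weekly_low_score_rates(orders):
    """Percentage of each provider's scored orders in the 1-3 band.

    Each order is a dict with 'provider' and 'score' keys; score is
    None when no discrete indication was chosen and ACR Select
    scoring was bypassed (those orders yield no data and are skipped).
    """
    scored = defaultdict(int)
    low = defaultdict(int)
    for order in orders:
        if order["score"] is None:  # no discrete indication: no score returned
            continue
        scored[order["provider"]] += 1
        if order["score"] <= 3:
            low[order["provider"]] += 1
    return {p: 100.0 * low[p] / scored[p] for p in scored}
```

Run weekly, the same pass over the data can also report the compliance rate itself: the fraction of orders that received any score at all.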
It is important to note that our implementation permits a workaround (typed-in free text) that bypasses the ACR Select scoring system. An alternative is available that would impose a hard stop if the user chose no ACR Select indication; we deliberately have chosen not to implement it. Initial feedback indicated that clinicians did not always appreciate (or understand) that we would prefer the discrete indication to free-text information, despite the education that we provided one to two weeks before the system’s implementation. We would like to avoid making this introduction confrontational, so we have elected to extend the educational campaign: we have started to attend departmental grand rounds, to contact department chairs, and to broaden these educational efforts.

Initial reporting has focused on score distribution (the appropriateness of orders) and on the volume of radiology orders. Although clinicians are not yet receiving decision-support feedback, certain trends have consistently emerged. On a weekly basis, 8% to 10% of orders fall into the category indicating that they are inappropriate. While many providers sporadically receive such a score, a few have a significant percentage of their orders in this category each week. This group represents a target for this initiative to improve care and diminish inappropriate utilization. Once active decision support is in place, these reports can be used to monitor overall volume and behavioral changes (for example, how many orders are started, but not placed—or are changed to more appropriate orders).

Challenges and Feedback

We have had no unsolicited negative reactions to our introduction of radiology decision support. The use of free text by a significant number of providers, however, might (in part) represent such a reaction.
When we reviewed our data, it became clear that a large volume of the free-text entries arose in our emergency department—a fast-paced care setting and a high-volume user of radiology exams. A focus session with members of the emergency-department staff identified three main challenges. First, the long lists of indications—with more than 100 possible selections, in some cases—caused many providers to use the free-text option instead of scrolling through a list. Having too many choices available when ordering some tests discourages users (especially occasional users) from perusing the lists. Second, there is no ability to search the list of indications, nor are the indications grouped in a clinically meaningful manner. The list is alphabetized, but indications are not always intuitively named: loss of consciousness (as a synonym for syncope), for example, is listed not under L, for loss, but under E, for episode of loss of consciousness. Some of the indications are so similar as to seem to be duplicates, and this contributes to the length of the lists. Third, there are interface limitations. Indications are often truncated, so the user must hover the cursor over them to see the full text. Many of these observations were confirmed in other focus sessions and discussions. We also learned that, despite our efforts, many providers remained unaware of the overall program and its goals.

Next Steps

We quickly decided to address the first issue, long lists, by confining our system to the shorter lists (of the most common indications). Orders with a high volume of noncompliance are being updated with truncated lists of indications, within the constraints of the EMR system. We did this first for exams of the central nervous system, and we achieved an improvement in compliance of as much as 10% for these exams. We are in the process of providing shortened lists for all other studies that have long indication lists.
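The search and naming problems described above could be mitigated by a synonym-aware index over the indication lists. The following is a minimal sketch of the idea; the entries and aliases are illustrative, not actual ACR Select data, and the functions are our own:

```python
def build_index(indications, synonyms):
    """Map lowercased search terms to canonical indication names."""
    index = {name.lower(): name for name in indications}
    for alias, canonical in synonyms.items():
        index[alias.lower()] = canonical
    return index

def search(index, query):
    """Return canonical indications whose name or any alias contains the query."""
    q = query.lower()
    return sorted({canonical for term, canonical in index.items() if q in term})
```

With "loss of consciousness" and "syncope" registered as aliases of "Episode of loss of consciousness," a provider typing either term would find the indication without scrolling to E.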
These changes will be monitored to determine their effectiveness. In addition, radiology reached out to specific department leaders, requesting increased compliance with the initiative. Process instructions within the EMR also were updated; the new wording asks users to select a discrete indication for the ordered exam. Our EMR vendor is aware of many limitations of the current implementation. It has done significant development work on the indications-of-use section in its 2014 release, and we have requested an update/patch to the existing release. These functionality changes to the EMR would not require a change in indication mapping or in the Web-services interface.

Despite the growing pains that we are experiencing, we believe that our current efforts are valuable. They have not been disruptive, and they have provided an early experience that will contribute to our ability to thrive in a new era of health-care delivery. We have already collected data indicating that there are providers who can improve their practices with clinical decision support and education. We would like to collect four to six months’ data in the background before turning on decision-support feedback for viewing by clinicians. The efficient, cost-effective delivery of care—with a concomitant commitment to raising the quality of care—requires us to implement (and evolve) the new tools that are available.

Galen Kilroy is application manager for computerized provider order entry at the Mount Sinai Medical Center (MSMC), New York, New York. Paul Francaviglia is associate director of reporting and analytics at MSMC. Aditi Vakil is director of IT at MSMC. David S. Mendelson, MD, FACR, is director of radiology information systems for the department of radiology and senior associate in clinical informatics at MSMC.