Demonstration data shows imperfect CDS performance, but one expert is optimistic
Clinical decision support (CDS) systems have significant difficulty matching imaging orders to appropriateness criteria, according to a recent study published in the Journal of the American Medical Association. But that may not tell the whole story of a CDS system’s worth.
In the JAMA study, Peter S. Hussey, senior policy researcher at the RAND Corporation, and colleagues examined data from the Medicare Imaging Demonstration (MID), which was collected from October 2011 to November 2013 by organizations in Massachusetts, Pennsylvania, New York, New Jersey, Michigan, Maine, Wisconsin and Texas.
Overall, the CDS systems had trouble identifying the correct appropriateness criteria. During the baseline period, the systems could not find matching appropriateness criteria for 63% of orders, and that figure was even higher, more than 66%, during the intervention period. Those exams could not be rated by the CDS system as “appropriate,” “equivocal” or “inappropriate,” which is the system’s primary function.
The systems were still effective, however, according to Gary Wendt, MD, professor of radiology, enterprise director of medical imaging and vice chair of informatics at the University of Wisconsin-Madison, one of the organizations that contributed to the MID project.
Wendt said the rest of the numbers must be taken into account to determine whether the systems worked. Appropriate orders rose from approximately 73% during the baseline period to 81% during the intervention period, while inappropriate orders dropped from 11.1% to 6.4% over the same span.
“High-tech imaging is a critical part of providing medical care,” Wendt told RadiologyBusiness.com in a phone interview. “And maybe the number of exams isn’t the real thing [to focus on], but maybe look at making the ones you do more appropriate.”
Wendt said some problems may have stemmed from CDS system design, especially the design of a few years ago, when the data was being collected. For instance, clinicians often didn’t have every relevant clinical scenario available as an option when entering information into the system. That may have led many to select an incorrect option, or to simply select “other” and move on.
“A lot of the orders didn’t have a clinical scenario because of the fact that the list of plausible exams was essentially incomplete,” Wendt said. “Oftentimes, you didn’t present enough possibilities, so I think that’s been one of the big things. We want to make sure we provide more appropriate indications.”
Wendt pointed out that CDS systems have improved in recent years. Thanks to organizations such as the American College of Radiology (ACR) dedicating time and resources to developing appropriateness criteria, he said, the list of options available to clinicians is now much more complete.
In fact, an ACR statement about RAND’s analysis of the MID data notes that the study does not reflect the capabilities of modern CDS technology, saying the findings are now “obsolete due to more recent technological advancements.”