Specialists still seeking the best way to measure performance in diagnostic radiology
Effectively and accurately measuring the performance of diagnostic radiologists has challenged the specialty for years, and experts have yet to settle on a solution. The authors of a new analysis published in the Journal of the American College of Radiology reflected on where prior attempts to measure performance in diagnostic radiology have missed the mark and looked ahead at potential solutions.
Peer-to-peer measurements such as the RADPEER scoring system are popular, for instance, but they are known to suffer from sampling bias, reviewer bias and the need for a large sample size to detect any significant differences between providers. Another issue with peer review, the authors noted, is the variability that exists from one radiologist to the next.
In 2018, the American College of Radiology (ACR) proposed 11 specific measures to help assess a radiologist’s performance. All 11 focus either on language included in radiology reports or on the use of low-dose CT examinations in certain situations. How well can they capture a specialist’s overall performance?
“Measurement of these objective outcomes can provide data revealing provider-level compliance with reported best practices, but these data have challenges of their own,” wrote authors Matthew S. Davenport, MD, of Michigan Medicine in Ann Arbor, and David B. Larson, MD, of the Stanford University School of Medicine. “Narrowly focusing on specific details for a few clinical conditions is unlikely to meaningfully improve care in a comprehensive way.”
Also, the ACR’s 11 metrics are all “process measures” rather than “outcome measures.” While a process measure examines what a radiologist does, an outcome measure examines the impact of that work on patients. Looking at a radiologist’s report, for example, means you are studying a process measure. Looking at the patient outcomes that result when ordering providers read and act on those reports, however, means you are looking at an outcome measure.
So what is the specialty to do? Davenport and Larson observed that electronic health records could help researchers measure a single radiologist’s impact on patient care by “controlling for ordering provider, patient health, patient setting and other factors.”
In addition, they wrote that radiologists must reassess how they are connected to other healthcare providers and develop “clinically meaningful” practice standards that can be monitored over time.
“Such standards can only be derived and enforced in cooperation with relevant stakeholders outside the field of radiology, including patients and ordering providers,” the authors wrote.