Tracking Patient Data and Measuring Provider Outcomes: Keeping an Eye on Quality Through Turnaround Times, Satisfaction Scores and Peer Review

As healthcare policies continue to emphasize value-based care, the tracking and management of data have become more important in radiology than ever before. And while turnaround time was once the primary metric for measuring quality of care and service throughout the imaging industry, it is no longer the only game in town.

Glenn Kaplan, MD, vice president of distributed radiology services at Sheridan Healthcare in Plantation, Fla., says there is a good reason turnaround time was the key metric for so long: it paints a clear picture of how efficiently the providers in question are working.

“Turnaround time is a valid measure of better care because faster turnarounds mean people getting out of the emergency room faster, ER wait times are decreased and hospital length of stay improves,” Kaplan says. “The quicker a patient leaves the hospital, the better, because it decreases the possibility of a hospital-related complication.”

Kaplan adds that turnaround times continue to be significant, in part because of the advanced ways statistics are now being measured. But imaging leaders must keep in mind that speed is far from the only thing that matters, especially in today’s environment, where value is prioritized over volume.

“Turnaround time still has tremendous impact on patient care,” he says. “However, I caution, there has to be a distinction between ‘fast’ and ‘too fast.’ In today’s market, we’re approaching the latter. You can only push the speed curve so far without sacrificing quality.”

Measuring What Matters

When it comes to data management, “measure what matters” is the advice of Richard Duszak, Jr., MD, professor and vice chair for health policy and practice in the department of radiology and imaging sciences at Emory University in Atlanta and an affiliate senior research fellow at the Harvey L. Neiman Health Policy Institute. For too long, measurable data have focused on Medicare payments instead of measuring one doctor’s performance against another.

“When you focus on just counting widgets, you can encourage some weird behavior,” Duszak says. “Turnaround time is an important metric. Ideally, you want to make your turnarounds as short as possible, so it’s a good metric in some situations—for example, in the emergency department, where time is of the essence. But some facilities measure turnaround time across all of their processes, and in some cases—like screening mammograms—it’s not as applicable.”

Tracking turnaround times without considering the proper context, Duszak warns, can cause providers to lose track of what’s most important: the quality of the care they are providing to patients. If administrators focus too much on how fast a specialist works, they are “incentivizing volume, but not value-driven care.”
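To make that distinction concrete, here is a minimal sketch of reporting turnaround time per clinical context rather than as a single house-wide number. The records and field names are hypothetical, not drawn from any particular RIS or PACS.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical exam records; the field names are illustrative, not from
# any specific RIS or PACS vendor.
exams = [
    {"context": "ED", "completed": "2017-03-01T10:05", "finalized": "2017-03-01T10:35"},
    {"context": "ED", "completed": "2017-03-01T11:00", "finalized": "2017-03-01T11:20"},
    {"context": "screening_mammo", "completed": "2017-03-01T09:00", "finalized": "2017-03-02T14:00"},
]

def turnaround_minutes(exam):
    """Minutes from exam completion to finalized report."""
    completed = datetime.fromisoformat(exam["completed"])
    finalized = datetime.fromisoformat(exam["finalized"])
    return (finalized - completed).total_seconds() / 60

# Report turnaround separately per context: a single house-wide number
# mixes settings where speed matters (the ED) with settings where it is
# less applicable (screening mammography).
by_context = defaultdict(list)
for exam in exams:
    by_context[exam["context"]].append(turnaround_minutes(exam))

for context, times in by_context.items():
    print(f"{context}: median TAT = {median(times):.0f} min (n = {len(times)})")
```

Stratifying this way keeps an ED speed target from being applied, inappropriately, to something like screening mammography.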

Duszak prefers outcome-based measures of the patient’s overall care, but says overall care is hard to measure. He says data should address questions such as, “What happened because I made this interpretation or that phone call?” but many different factors have to be included before drawing a conclusion.

“So much happens downstream,” Duszak says. “I may provide a really good interpretation, and do all the right things, but so many variables are outside my control.” 

Tracking outcomes-based and value-based data is the goal, he adds, but while the industry works toward reaching that point, specialists must start asking themselves, “How often and how well did I have an extra communication about that imaging service?” Sometimes that means reaching out to the referring physician or speaking directly to the patient. This can take up a lot of a specialist’s time, he adds, “but going above and beyond just generating reports and tracking extra communications is important.”

Patient satisfaction is another common performance metric in healthcare, but Duszak says many radiologists have little use for it. Take note, he urges: patient satisfaction scores will gain importance in terms of compensation and standing in the marketplace, and dismissing them outright is shortsighted.

His department is developing a culture of placing higher value on patient satisfaction surveys and celebrating when radiologists or staff are singled out by a patient for a job well done. “We’re moving into an environment of fuzzier, squishier metrics like satisfaction,” he says. “The challenge for those of us trying to move metrics forward lies in getting people comfortable with that while knowing that some patients are permanently disgruntled and will not give anyone a good score.”

Surveying patients and other groups electronically might provide bigger sample sizes with quicker, more accurate and instructive results. But the true need is for real-time data. “That won’t help us monumentally in measuring what they think, but it will allow us to iterate better and faster,” Duszak says. “We can leverage emerging survey technology to help us do better—and score better.”
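As a rough illustration of the real-time angle, the sketch below keeps a rolling average of satisfaction scores as electronic responses arrive. The scores and window size are invented for the example.

```python
from collections import deque

# Rolling window of the 50 most recent scores; the window size is an
# arbitrary illustrative choice.
window = deque(maxlen=50)

def record_response(score: int) -> float:
    """Record a 1-5 satisfaction score and return the rolling average."""
    window.append(score)
    return sum(window) / len(window)

# Hypothetical responses arriving one at a time.
for score in [5, 4, 2, 5, 3]:
    print(f"rolling average after score {score}: {record_response(score):.2f}")
```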

Peer Reviews on the Rise

Along with the wave of increasing patient feedback, peer reviews are growing in frequency and playing a more significant role in managing outcomes-related data. Peer reviews should remain as anonymous as possible, according to Kaplan, and providers should develop a scoring system that records whether the reviewer agrees or disagrees with the interpreting physician’s reading.

“We can do this easier now, so we’re trying to peer review as many as 5 percent of radiology studies,” Kaplan says. “We also look at sub-specialization rates. Certain studies show quality improvement when read by a subspecialist, such as pediatrics, mammography or musculoskeletal MRI examinations.”
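A minimal sketch of that kind of sampling and scoring might look like the following. The study IDs and the simple agree/disagree flag are illustrative assumptions; formal programs such as RADPEER use graded scales.

```python
import random

# Hypothetical study IDs from a worklist.
study_ids = [f"STUDY-{i:05d}" for i in range(1, 2001)]

# Randomly select roughly 5 percent of studies for anonymized review.
SAMPLE_RATE = 0.05
sample = random.sample(study_ids, k=int(len(study_ids) * SAMPLE_RATE))

# Each review records only whether the second reader agrees with the
# original interpretation; reviewer identities are not stored.
reviews = [{"study": s, "agrees": True} for s in sample]
reviews[0]["agrees"] = False  # illustrative disagreement

discordance = sum(not r["agrees"] for r in reviews) / len(reviews)
print(f"Reviewed {len(reviews)} studies; discordance rate = {discordance:.1%}")
```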

David Larson, MD, MBA, associate chair of performance improvement for the department of radiology at the Stanford University Medical Center in Stanford, Calif., thinks the peer reviewing process still needs considerable improvement.

Radiology practices collect plenty of data through peer review, but can those data truly be trusted to provide accurate information?

“The main problem is that we’re asking professionals to judge and grade their peers, knowing full well the potential consequences,” Larson says. “That knowledge introduces so much bias that peer reviews become virtually useless as a metric. If you’re using peer reviews to identify individuals who need remediation, that colors the process so heavily.”

If the objective of peer review is simply to learn from mistakes rather than to punish those who may or may not have made them, Larson adds, people can “let their guard down” more.

Tracking Technologist Data

Tracking and measuring data from technologists also can have a significant impact on patient care. For a recent Journal of the American College of Radiology study, Timothy P. Szczykutowicz, PhD, and colleagues studied one year’s worth of technologist data to identify areas for improvement (J Am Coll Radiol. 2017 Feb;14(2):224-230).

Szczykutowicz, an assistant professor of radiology, medical physics, and biomedical engineering at the University of Wisconsin-Madison, says that while turnaround time has historically been one of the major metrics for evaluation, other factors such as radiation dose and image quality are gaining importance.

“There’s a shift to evaluating the quality of the examination,” he says. “Looking at the radiation dose information is an indirect way of looking at quality; looking at image quality is a direct way. Or we can see where they deviated—and whether the deviation might have been a good thing after all.”

Szczykutowicz says that, overall, his team was able to confirm that its efforts to get all the technologists on the same page had been a success. Tracking data can help drive change, but it can also help providers know when not to make a change at all.

“This was a confirming study for us,” Szczykutowicz says. “We encourage our technologists to not make changes in procedures, and incorrect changes were down to 0.03 percent, which we were happy with.”
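The arithmetic behind a figure like that 0.03 percent is simple to reproduce. Below is a sketch against a hypothetical per-exam protocol log; the field names are invented for illustration.

```python
# Hypothetical per-exam protocol log: each entry records whether the
# technologist deviated from the department protocol and, if so, whether
# review judged the change appropriate. Field names are invented.
exam_log = [
    {"exam_id": "CT-0001", "deviated": False, "deviation_ok": None},
    {"exam_id": "CT-0002", "deviated": True, "deviation_ok": True},
    {"exam_id": "CT-0003", "deviated": True, "deviation_ok": False},
]

total = len(exam_log)
deviations = [e for e in exam_log if e["deviated"]]
incorrect = [e for e in deviations if e["deviation_ok"] is False]

print(f"Deviation rate:        {len(deviations) / total:.2%}")
# The incorrect-change rate is the analog of the 0.03 percent figure
# reported by the Wisconsin group.
print(f"Incorrect-change rate: {len(incorrect) / total:.2%}")
```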

Szczykutowicz adds that tracking and managing data also is at the center of the university’s efforts to put more focus on specific quality metrics such as contrast media administration.

“We’re also looking at contrast, and how well it was administered,” he says. “We want to compare apples to apples, but if there was a difference in imaging quality due to contrast administration and scan timing, the comparison won’t be as useful.”

Repeat exams, which are closely tied to dose, are another variable the group is tracking. “People are not going to write down every time there’s a repeat; that’s human nature,” he says. “If you’re on the CT scanner and the technologist scans a few more times than he should have, that’s important information and should be monitored.”
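One way to monitor repeats automatically, rather than relying on manual logging, is to compare acquisition counts against what the protocol expects. The sketch below assumes a hypothetical series log and expected-count table.

```python
from collections import Counter

# Hypothetical series log for one CT exam: (series description,
# acquisition number). The descriptions and expected counts are invented.
series_log = [
    ("AX HEAD", 1),
    ("AX HEAD", 2),  # re-acquired: possible repeat
    ("AX HEAD", 3),  # re-acquired again
    ("SAG REFORMAT", 1),
]

EXPECTED_ACQUISITIONS = {"AX HEAD": 1, "SAG REFORMAT": 1}

counts = Counter(desc for desc, _ in series_log)
for desc, n in counts.items():
    expected = EXPECTED_ACQUISITIONS.get(desc, 1)
    if n > expected:
        # Repeats add dose; flag them automatically rather than relying
        # on technologists to write them down.
        print(f"{desc}: {n} acquisitions (expected {expected}) - flag for review")
```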

Joseph Dobrian, Contributor
