Managing Quality and Priorities in a Value-based World
The past few years have seen a growing trend among hospitals toward using metrics to bolster performance. While hospitals naturally continue their quest to meet measures handed down by regulatory agencies and payors, the move toward value-based healthcare, coupled with the continued push to improve the caliber of patient care, is spurring some institutions to concentrate heavily on a wide variety of internally developed quality metrics.
“Compliance is important, but these types of metrics make a tremendous difference in how much quality and value we can deliver,” says Alison Conway, MS, CNMT, RT(N), director of imaging and radiology safety officer, University of Maryland|Upper Chesapeake Health System. Conway oversees the radiology departments at two of the system’s 13 hospitals: UM Harford Memorial Hospital (HMH) in Havre de Grace, with 100 beds, and Upper Chesapeake Medical Center (UCMC) in Bel Air, with 200 beds. The two hospitals perform approximately 50,000 and 130,000 imaging exams per year, respectively.
UM metrics drive improvement
Both UM Harford Memorial Hospital and Upper Chesapeake Medical Center track “too many quality measures to count,” says Conway, noting that these are organized into several “buckets”: clinical operations goals (including safety goals), quarterly performance improvement goals, and Daily Action for Success Huddles (DASHs). Organizational safety metrics also must be met, and each department drills down into its own metrics.
As an example of a clinical operations goal, Conway cites one of this year’s measured objectives: performing “the right test, for the right patient, at the right time,” with no “two-patient identifier incidents.” Proper positioning for pediatric chest x-rays exemplifies a process-improvement metric at the two hospitals, as does CT turnaround time to the emergency departments.
DASH goals and metrics change monthly or quarterly, depending on their nature. Examples of DASH metrics include whether a given team member left at the end of a shift, whether a particular exam was completed for final read when it was verified in the PACS, and whether an area was stocked for the day. The first metric is intended to track overtime, late add-on cases and team member engagement; the second, whether all workflow steps (image capture, scanning, and verification) were taken.
“With the DASH goals, we track the number of occurrences, then use a Pareto chart to see why they occurred” and determine what needs to be done to address them, Conway explains. She adds that communication of measurements is extremely important, as it leads to cohesiveness within the department (e.g., all constituents united to attain goals) and makes it easier to demonstrate that the radiology department supports the hospital’s emphasis on quality care.
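For readers who want a concrete picture of that step, the sketch below shows one way such a tally might be turned into a simple Pareto table: count occurrences by cause, rank them, and compute cumulative percentages so the team can see which few causes account for most of the misses. The cause labels and the Python implementation are illustrative assumptions, not UM’s actual tooling.

```python
# Minimal Pareto-style tally of DASH occurrences, assuming each occurrence is
# logged with a short cause label; the causes and counts here are hypothetical.
from collections import Counter

occurrences = [
    "late add-on case", "exam not verified in PACS", "late add-on case",
    "area not stocked", "late add-on case", "exam not verified in PACS",
]

counts = Counter(occurrences).most_common()   # causes ranked by frequency
total = sum(n for _, n in counts)

cumulative = 0
print(f"{'Cause':30} {'Count':>5} {'Cum %':>7}")
for cause, n in counts:
    cumulative += n
    print(f"{cause:30} {n:5d} {100 * cumulative / total:6.1f}%")
```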
DASH results are displayed on dashboards within the radiology departments. Metrics are also discussed at monthly staff meetings and reported “up through the clinical operations division” at HMH and UCMC, Conway says, including to an environmental care committee and a radiology safety committee.
Some metrics are finite; others are not. “If a measure relates to a problem that needs to be fixed, and it has been fixed, then we may [retire] it,” Conway states. “Sometimes one measure leads to another. But if it’s study repeat rates, we’ll always continue to measure those to prevent them from creeping up again.”
Langone’s clinical quality dashboard
For its part, NYU Langone Medical Center tracks 15 internally developed clinical measures broken down into three categories: operational, pure quality (process), and patient safety, with one individual responsible for each area. These are tracked using a dashboard developed by Michael P. Recht, MD, Louis Marx Professor of Radiology and chair of the department of radiology, and Danny C. Kim, MD, director of quality, department of radiology. The dashboard is centralized, so that all members of the department can access it without impediment.
At NYU Langone, clinical pathways are one area to which metrics are applied. For example, one pathway contains recommendations for following up on pulmonary lesions depending on their size, age and morphology. An algorithm embedded in reports serves as a guideline for this follow-up by referring physicians. A metric then tracks whether those recommendations were followed.
In another interesting twist, Recht says, some metrics are broken down into individual components in a quest to promote quality improvement on the clinical side. “Take report turnaround time from exam to delivery to the referring physician,” he elaborates. “There is the time from ordering the exam to the patient’s arrival in the department, from the start of the exam to the end of the exam, from the end of the exam to the PACS and from the PACS to the final report. By breaking down the metric this way, if the metric as a whole isn’t where it should be, we can see why. This would be difficult otherwise.”
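As a rough illustration of that decomposition (not NYU Langone’s actual system), the sketch below computes each component interval from a set of hypothetical RIS/PACS timestamps, so a slow end-to-end turnaround can be traced to a specific step; the field names and times are assumptions.

```python
# Break one exam's turnaround time into component intervals.
# Timestamps and field names are hypothetical, for illustration only.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"
exam = {
    "ordered":         "2024-03-01 08:00",
    "patient_arrived": "2024-03-01 08:40",
    "exam_started":    "2024-03-01 09:00",
    "exam_ended":      "2024-03-01 09:25",
    "images_in_pacs":  "2024-03-01 09:35",
    "final_report":    "2024-03-01 10:50",
}

steps = list(exam)                                   # dicts preserve insertion order
times = [datetime.strptime(exam[s], FMT) for s in steps]
for name_a, name_b, t0, t1 in zip(steps, steps[1:], times, times[1:]):
    minutes = (t1 - t0).total_seconds() / 60
    print(f"{name_a} -> {name_b}: {minutes:.0f} min")
print(f"total: {(times[-1] - times[0]).total_seconds() / 60:.0f} min")
```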
At one particular juncture where this metric was “off,” a look at the different components of the measure brought to light the fact that, for a certain shift, the department was short-staffed by one technologist. A technologist was added to the team in a pilot program, and turnaround time rapidly improved. “It was not difficult, then, to convince the hospital administration that it was necessary to hire another full-time technologist to keep things moving in the department,” Recht states.
Raising the clinical bar
NYU Langone’s radiology department has also taken metrics to a new level: one measure was recently used to assess the value of a decision-support tool for the management of incidental, asymptomatic ovarian cysts detected on ultrasound, CT and MRI. A group of gynecologic oncologists and abdominal radiologists formulated collaborative institutional recommendations for this task by modifying the published consensus recommendations developed by the Society of Radiologists in Ultrasound, based on local practice patterns and their own experience.
The group also undertook a less formal process, circulating the published consensus recommendations along with suggested revisions until consensus was reached. Radiology reports that included incidental ovarian cysts discovered in the 34 months prior to the introduction of the tool were retrospectively reviewed. For cysts detected on ultrasound, adherence rates to the Society of Radiologists in Ultrasound recommendations were calculated for examinations before tool launch and compared with adherence rates to the collaborative institutional recommendations after tool launch.
Additionally, electronic medical records were reviewed to determine the follow-up chosen by the clinician. For cysts detected on ultrasound, the rate of radiologist adherence to recommendations improved from 50% (98 of 197) without the decision tool to 80% (111 of 139) with the decision tool. Over-management of these incidental findings (i.e., the recommendation of follow-up when it was not indicated or the use of follow-up at a shorter interval than indicated in the recommendations) decreased from 34% (67 of 197) without the decision-support tool to 10% (14 of 139) with the decision-support tool.
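The percentages above follow directly from the quoted counts; the small sketch below simply reproduces that arithmetic for readers who want to check it, using only the figures reported in the study.

```python
# Reproduce the adherence and over-management rates quoted above.
def rate(numerator, denominator):
    return 100 * numerator / denominator

print(f"adherence before tool:       {rate(98, 197):.0f}%  (98 of 197)")
print(f"adherence with tool:         {rate(111, 139):.0f}%  (111 of 139)")
print(f"over-management before tool: {rate(67, 197):.0f}%  (67 of 197)")
print(f"over-management with tool:   {rate(14, 139):.0f}%  (14 of 139)")
```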
The team concluded that management recommendations developed through collaboration with clinicians may help to standardize the follow-up of ovarian cysts and reduce overutilization of imaging, contributing to the caliber of care and benefitting the bottom line. “We’re seeing that internal metrics can tell us a lot more than what may initially meet the eye,” Recht observes.
Linking quality to outcomes
Syed Zaidi, MD, corroborates Recht’s comments about the increasing positive impact of internal metrics on quality and value, adding that a comparable impact can be seen in the management of inpatient imaging procedures. Zaidi is president of Radiology Associates of Canton (RAC) in Canton, Ohio. Under the terms of a shared-governance model, RAC manages the radiology department of 542-bed Aultman Hospital, also in Canton, and receives fair market value for meeting quality metrics.
“We have a mix of process and clinical metrics, but we are emphasizing the clinical side—metrics linked to clinical outcomes—much more heavily,” Zaidi says. Clinical outcomes support quality patient care and, hence, demonstrable value; in some cases, they have the potential to reduce length of stay and costs, he adds.
Examples of clinical outcome-linked metrics implemented at Aultman include CT biopsy accuracy, inferior vena cava (IVC) filter retrieval rates, and improvements in interventional radiology services. Metrics intended to better manage inpatient imaging—for instance, hours from image diagnosis to CT-guided biopsy—are on the roster as well. Improvements in the latter have decreased the length of stay at Aultman by three days, Zaidi reports.
Each year, Zaidi and his colleagues select and introduce a new clinical metric. Some metrics are developed in response to questions from clinicians or the identification of worrisome trends. The CT biopsy metric, for example, came about when a review of samples showed that the accuracy of CT biopsies performed at Aultman Hospital was lower than it should have been.
The radiology department at the University of Maryland Medical Center in Baltimore, which like HMH and UCMC is part of the University of Maryland|Upper Chesapeake Health System, is also using internally developed metrics in part to keep a tight rein on inpatient imaging. “We drill down into multiple levels of why things are happening,” observes Penny Olivi, MBA, CRA, FAHRA, RT, senior administrator, radiology. One example is delays in performing MRI exams on patients.
The department now measures the frequency of different factors that may cause such delays, ranging from patients not having received the proper sedation to their being too ill to be sent to the department for studies. “It’s all about peeling the onion, and controlling what we can,” Olivi says.
Measurement resources
Just as hospitals are devoting significant effort to developing extensive sets of internal metrics, they are also taking a highly methodical approach to identifying and addressing the areas of measurement that support those metrics.
Stanford University Medical Center in California, which has about 400 beds and an annual imaging volume of about 300,000 studies, is a case in point. The hospital’s radiology department harnesses a homegrown program known as Realizing Improvement Through Team Empowerment (RITE) to identify and execute quality improvement projects built around aligned metrics, notes David Larson, MD, associate chair for performance improvement and associate professor at Stanford University (see Stanford’s Quality Quest).
Prospective quality improvement projects are submitted to a quality improvement team for review and then prioritized based on such factors as likelihood of success and potential ROI. The highest priority is assigned to projects that can offer the most extensive benefits and greatest value at the lowest cost.
Once a decision has been made to pursue a given project, a team of constituents is formed to address it. For instance, at one juncture, it was determined that the duration between stroke code and the performance of the necessary CT scan was excessive. Under the RITE umbrella, a group comprising representatives from the emergency, neurology and radiology departments, as well as from the transport team, was assigned to “look into what was happening and how to change the process by applying measures to monitor times,” Larson states.
Each group assigned to a project undergoes 10 two-hour sessions, held every two weeks. “We teach them how to measure processes, plot them graphically, and report performance, as well as how to recognize the difference between significant changes (when applying a metric) and mere noise in the variables,” Larson asserts. He and his colleagues “watch the project on the back end, documenting progress weekly and mobilizing behind the scenes to mitigate those difficulties that arise.”
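The distinction Larson draws between significant changes and mere noise is commonly handled with statistical process control. The sketch below shows one generic approach, an individuals (XmR) chart with limits computed from a baseline period, applied to made-up weekly stroke-code-to-CT times; it illustrates the general technique, not Stanford’s actual RITE tooling.

```python
# Generic XmR (individuals) control chart arithmetic: limits come from a
# baseline period, and later points outside the limits suggest a real process
# change rather than routine week-to-week noise. All numbers are made up.
def xmr_limits(baseline):
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar  # standard XmR constant

baseline_weeks = [42, 45, 40, 44, 43, 41, 46, 44]   # minutes, before the intervention
later_weeks = [28, 30, 29]                           # minutes, after the intervention
center, lcl, ucl = xmr_limits(baseline_weeks)

print(f"center {center:.1f} min, limits [{lcl:.1f}, {ucl:.1f}]")
for week, value in enumerate(baseline_weeks + later_weeks, start=1):
    flag = "  <-- beyond limits: likely a real change" if not (lcl <= value <= ucl) else ""
    print(f"week {week:2d}: {value} min{flag}")
```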
On the support side, the University of Maryland Medical Center has implemented a Quality Academy intended to educate radiology constituents on the importance of quality care and on how to use data and metrics to enhance it. “Everyone needs to attend six lectures, understand data, and complete a PQI project in order to graduate,” Olivi explains.
A little help from IT
Meanwhile, for some hospitals, technology plays a role in identifying new metrics. This is true of Johns Hopkins Hospital in Baltimore, reports Paul Nagy, PhD, FSIIM, associate professor, Johns Hopkins School of Medicine, department of radiology and radiological sciences, and executive director, Johns Hopkins Technology Innovation Center.
“The hospital recently implemented a quality metric management system that pulls data from the RIS and allows that data to be ‘sliced and diced’ in order to quickly build metrics with run charts for multiple frontline quality improvement teams,” Nagy said in an email communication. “Each year, 14 to 16 of these teams, made up of technologists, nurses, radiologists, schedulers, patient transport personnel, and administrators, undertake projects aimed at upping the efficiency and service ante within the radiology department.”
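As a rough sketch of that “slice and dice” idea (not the actual Johns Hopkins system or schema), the snippet below groups hypothetical RIS exam records along a chosen dimension and emits the weekly rates that could feed a run chart for a frontline team.

```python
# Group hypothetical RIS exam records by week and a chosen dimension, and
# compute a weekly rate suitable for a run chart. Record fields, the
# repeat-exam flag, and the data are illustrative assumptions.
from collections import defaultdict

exams = [
    {"week": "2024-W01", "modality": "CT", "site": "ED", "repeat": False},
    {"week": "2024-W01", "modality": "CT", "site": "ED", "repeat": True},
    {"week": "2024-W02", "modality": "MR", "site": "OP", "repeat": False},
    {"week": "2024-W02", "modality": "CT", "site": "ED", "repeat": False},
]

def run_chart_series(records, dimension, predicate):
    """Weekly rate of records matching `predicate`, sliced by `dimension`."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["week"], r[dimension])
        totals[key] += 1
        hits[key] += predicate(r)
    return {k: hits[k] / totals[k] for k in sorted(totals)}

# Example: weekly repeat rate by site (one series per run chart line)
for (week, site), repeat_rate in run_chart_series(exams, "site", lambda r: r["repeat"]).items():
    print(f"{week} {site}: {repeat_rate:.0%}")
```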
Moreover, NYU Langone Medical Center has put in place initiatives and tools intended to complement the use of metrics in supporting a high caliber of care and the delivery of value across all radiology specialties and subspecialties. As an example, the quality and usefulness of imaging reports to referring clinicians have been enhanced by the integration of actual images with prose; this makes it simpler for clinicians to draw conclusions from what they are reading, Recht asserts.
“There has been so much talk about overall quality improvements, but quality improvement also means [boosting] the quality of our product—radiology reports,” Recht says. “It can work in tandem with metrics.”
Clinicians also can initiate WebEx conversations with sub-specialists on call, during which they can ask questions about images and reports and even physically circle, on-screen, specific portions of images about which they wish to inquire.
With help from an outside vendor, Langone’s radiology department has also created a database of cases with concordant outcomes. Clinicians can locate these cases according to different metrics. In the first four months following the introduction of the database, more than 7,000 cases were reviewed.
“Too often, we look at individual cases for errors of omission or commission, but now we can look at a whole system to find aggregate cases,” Recht says. “This adds to the overall value of having metrics. It’s necessary to find an outside vendor to do a project like this one, but it’s very much worth it.”
Although hospitals will continue to develop their own metrics as they strive to hit and surpass quality targets, some would like to see payors bring more measurements into the fray and then incentivize healthcare providers for attaining goals. “It would be nice, for example, to see more rewards for decreasing unnecessary costs from unnecessary imaging,” Zaidi concludes. “All around, metrics are a good thing.”