The Data-driven Practice: How Quality Metrics and Data Are Driving Radiology
Turnaround time, step aside: A new generation of quality metrics is paving the way toward value-based radiology
Healthcare reform continues to impact nearly every aspect of radiology practice. One of the most significant changes is the transition from a volume-based, fee-for-service model contingent on RVU production to one based on the delivery of value-added, quality care. That shift is driving the formation of intricate, wide-reaching quality agendas and infrastructures under whose terms myriad quality measures are tracked, quality targets are carefully chosen and prioritized, and data are used to assess progress.
“There’s no getting around the need for quality infrastructure and metrics to prove value,” says Steven Miles, MD, FACR, president and chief quality officer of Halifax Health, Daytona Beach, Fla., which provides a continuum of healthcare services through a network that encompasses two hospitals, four cancer treatment centers, the area’s largest hospice organization, psychiatric services and a preferred provider organization. Halifax Health also is one of six independent radiology practices that make up the core of Preferred Radiology Alliance (PRA), Daytona Beach, Fla., a managed services organization (MSO) that spans 124 physicians, 21 hospitals and 15 outpatient imaging centers.
In the new healthcare paradigm, Miles asserts, “nobody is going to be paying per click. Data that illustrate value are going to be just as important as the bill.”
Lasting cultural change
For Radiology Associates of Canton (RAC), Canton, Ohio, the shift Miles describes has occurred under a co-management agreement for the radiology service line. Forged four years ago with Canton-based Aultman Hospital, to which RAC has long provided subspecialized imaging services and 24/7 on-site night coverage, the agreement calls for a governance model wherein all departmental decisions are made jointly by RAC’s radiologists and the hospital administration, and for the approval of capital purchases by both entities. More importantly, it mandates a quality infrastructure wherein clinicians are paid on a performance basis, as well as a patient-centered radiology program that harnesses radiology clinical coordinators to expedite care and decrease length of stay (LOS) for certain patients.
“The co-management agreement came about largely because of pressures brought on by the volume-driven marketplace,” states RAC President Syed Zaidi, MD. Zaidi, who also serves as CEO of a consulting and management entity that assists hospitals and other radiology practices in developing similar partnerships, notes that before the agreement was solidified, RAC was grappling with unrelenting reimbursement cuts, internal disagreements about focusing on increasing volume rather than offering consultative services to its clinicians, and wavering hospital relationships despite a high level of volume-driven service. A consulting firm had been brought in by the hospital to evaluate whether RAC, whose contract was up for renewal the following year, could be displaced in favor of improved radiologic services.
“We proposed co-management, and the hospital administration bought into it because they liked the philosophy behind it, which is to create a framework that supports lasting cultural change adopted by both clinicians and administrators,” Zaidi notes. “Aligned incentives connected to quality and efficiency improvement initiatives drive that change.”
Within the new quality- and value-oriented model stipulated in the agreement, pay-for-performance is based on metrics to which fair-market value is tied and that were chosen to measure individual and collective progress, trends and the effectiveness of RAC radiologists. “Some of the metrics—like inpatient and emergency department turnaround time—are convenient and easily measurable,” Zaidi says.
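Turnaround time illustrates why Zaidi calls some metrics convenient: they can be computed directly from timestamps that most RIS/PACS platforms already record. The following is a minimal sketch of how a practice might derive per-radiologist turnaround time from such timestamps; the field names and sample values are hypothetical and not drawn from RAC’s systems.

    # Hypothetical sketch: per-radiologist report turnaround time (TAT)
    # computed from exam-completed and report-finalized timestamps.
    from datetime import datetime
    from statistics import median

    FMT = "%Y-%m-%d %H:%M"

    # (radiologist, exam completed, report finalized) -- fabricated sample records
    exams = [
        ("Radiologist A", "2015-03-02 08:15", "2015-03-02 09:05"),
        ("Radiologist A", "2015-03-02 10:30", "2015-03-02 10:55"),
        ("Radiologist B", "2015-03-02 09:00", "2015-03-02 11:40"),
    ]

    tat_minutes = {}
    for rad, completed, finalized in exams:
        delta = datetime.strptime(finalized, FMT) - datetime.strptime(completed, FMT)
        tat_minutes.setdefault(rad, []).append(delta.total_seconds() / 60)

    for rad, tats in sorted(tat_minutes.items()):
        print(f"{rad}: median TAT {median(tats):.0f} min across {len(tats)} exams")

Outcome-oriented measures of the kind Zaidi describes next require considerably more work, since they depend on linking imaging events to downstream clinical results rather than to timestamps alone.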
RAC was fortunate in that a precedent for co-management had already been set at Aultman with the deployment of such a model for employed cardiologists and independent oncologists. “However, we go beyond that because the whole purpose here is to measure meaningful data and build a better platform for improving the caliber of imaging services and [foster] engagement with referring physicians,” he notes.
Accordingly, process metrics, such as helping the hospital to comply with Joint Commission standards for completing histories and physical examinations before patients undergo minimally invasive procedures, are tracked. So, too, are metrics linked to clinical outcomes, e.g., CT biopsy accuracy, mammography recall rate, inferior vena cava filter retrievals, and lung biopsy adequacy (or accuracy) rate.
Setting the quality agenda
Meanwhile, at Beth Israel Deaconess Medical Center in Boston, the overall quality agenda is now set by the enterprise and by the ACO and Alternative Quality Contract arrangements under whose terms the radiology department functions, as well as by an annual operating plan linked to the hospital’s missions. In the radiology department, the quality improvement agenda is derived in part from customer feedback, referring physician surveys, root cause analyses of sentinel events and section performance quality improvement (PQI) projects, explains Jonathan B. Kruskal, MD, PhD, department chair and professor of radiology at Harvard Medical School.
A Quality Assurance committee manages all regulatory requirements, compliance with National Patient Safety Goals (NPSGs) and peer review, section PQI projects, trainee quality education, on-boarding and staff/patient safety portfolios. A separate committee drives a performance improvement portfolio that focuses on human-factors engineering and safety culture, big data/informatics group management, change management, outcomes metrics, peer learning/feedback and quality research efforts.
This is a very different scenario from five years ago, when the department’s quality-improvement efforts were primarily reactive and targeted the very low hurdles of older regulatory requirements, Kruskal says. The department tracks and manages common process and operational metrics, “but more interesting…are the outcomes metrics we use to show our hospital and referring physicians just how we can help them and what our contributions to care are,” he explains.
Five major metrics groups employed in keeping with the quality infrastructure encompass report recommendations (to show how they can be standardized and minimized); results communication (compliance with accepted policy as well as documenting required elements in accordance with Joint Commission requirements); responses to customer surveys (in the hospital’s case, to minimize errors in approved reports, increase radiologist availability, and expand local in-house attending service coverage); total inpatient turnaround time; and procedure outcomes, including the need to re-biopsy and readmit patients, thereby prolonging their LOS.
“We collect many more for research purposes and will implement them once their value-add has been shown,” Kruskal states. Among metrics currently under review are staff safety efforts, appropriateness and initiatives to facilitate operating-room throughput.
Kruskal says Beth Israel Deaconess is beginning to implement “value-based metrics,” which, he believes, all radiologists and imaging service providers must adopt to remain relevant in their healthcare role. Such metrics (see The Value Metric Matrix) transcend internal process management, have a more global focus on population health and are disease-specific. They also measure how radiology affects each patient’s health status and process of recovery following each episode of care, as well as how imaging sustains the patient’s health after that episode.
Engaging the frontline
Johns Hopkins Medicine (JHM) in Baltimore, operator of six academic and community hospitals, four suburban healthcare and surgical centers and 39 primary and specialty care outpatient sites, has taken steps to create a culture of quality that hinges on employee engagement. Quality has been a central part of the management strategy in the department of radiology and radiological sciences under the leadership of Jonathan Lewin, MD, according to Paul Nagy, PhD, FSIIM, associate professor, Johns Hopkins School of Medicine, department of radiology and radiological sciences, and executive director, Johns Hopkins Technology Innovation Center.
“Engaged employees are at the heart of the quality movement,” Nagy asserts. “Engaged employees provide safer care and better service, and have higher work satisfaction. We should be doing all we can to empower our employees to fix broken healthcare processes.”
Regularly scheduled communication among stakeholders at all levels (management and otherwise) as well as across all job functions (e.g., radiologists, technologists, administrators, transportation personnel) promotes this significant degree of engagement, Nagy reports. All participants are encouraged to share perspective without reservations, and management consistently assumes the stance that all contributions are valuable and worthy of nonjudgmental consideration.
Targets for improvement are identified and selected by those stakeholders who are closest to the problem; some come to light as a result of patient complaints. Sixteen cross-functional quality improvement teams composed of constituents from inside and outside the radiology discipline (e.g., anesthesia and internal medicine) are currently involved in radiology QI initiatives spanning a wide swath of metrics, from order appropriateness and the effective use of a new order interface to protocol optimization. A quality leadership group provides resources for achieving these metrics, including assistance with access to project- and metrics-related data maintained on a SharePoint server (e.g., data needed to perform root-cause analyses) that is leveraged as a collaboration tool.
Big role for data, tools
But infrastructure and metrics are only a piece of the puzzle. Data play a significant role in helping radiology practices and departments measure the impact of quality monitoring and find ways to hit quality targets, with informatics tools facilitating both easy access to information and more straightforward data analysis.
Beth Israel Deaconess uses data depicted on and accessed through what Kruskal calls “managed dashboards” to keep tabs on metrics; without viewing the data, the physician says, it is impossible to assess how closely, or even whether, quality targets are being attained. A concerted effort is made to collect only data that will help drive improved performance; “simply collecting data for the sake of doing so is an absolute waste of time and resources,” Kruskal emphasizes. He believes the simplicity of the dashboards is the best predictor of data utilization, noting that “the less sophisticated [the tools], the better the chance of colleagues embracing the process.”
RAC employs data-mining software to pinpoint data that pave the way for evaluating the outcomes of imaging studies. Combined with claims data, this information is being leveraged to educate physicians about how their utilization patterns and outcomes stack up against those of other clinicians. The end result is more appropriate imaging utilization and lasting changes in that regard.
Zaidi states: “Measuring performance metrics has made it possible to demonstrate that the radiology department’s efficiency and quality are, in fact, improved—results we report directly to hospital administrators. It has had a very positive effect on our relationship with the hospital and created an entrée to alliances (with other hospitals), as well as improved our position in discussions about future payment models and shared savings reimbursements—and allowed radiology to assume a leadership role in utilization management and imaging education.”
Additionally, to expedite and support enhancements in patient-centered care, reduce LOS and decrease costs, two care coordinators follow up on inpatient and outpatient cases alike by actively mining patient data. “We’ve made great strides in this area,” Zaidi states. He cites as an example a three-day reduction in the interval from initial imaging to CT-guided biopsy for inpatients in 2014 compared with 2013, cutting LOS and costs. Similarly, IVC filter retrieval rates rose by 26% from 2013 to 2014, a major achievement given FDA recommendations that such filters not remain permanently in patients’ bodies unless medically necessary, according to Zaidi.
At JHM, data play a pivotal part not only in monitoring quality but also in devising means of solving quality-related problems. Recently, a team used various data to complete a protocol optimization project aimed at improving the coordination of services for complicated procedures (e.g., pediatric MRI requiring anesthesia).
The Preferred Radiology Alliance takes much the same approach. Constituents from each group were asked which quality metrics they assess, and a list of approximately six commonly used measures was compiled. Slated for expansion, the roster includes turnaround time, critical values, recidivism, radiation dose, and LOS (for a few different procedure codes), among others. Data are reviewed each month by a quality committee.
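As a hedged illustration only (the group names, measure names and values below are invented, not PRA data), a monthly roll-up of member-group submissions for such a committee review might look something like this:

    # Hypothetical sketch: rolling up member-group quality measures for a
    # monthly committee review. All groups, measures and values are invented.
    from collections import defaultdict

    submissions = [
        {"group": "Group 1", "measure": "ED turnaround time (min)", "value": 38.0},
        {"group": "Group 2", "measure": "ED turnaround time (min)", "value": 45.0},
        {"group": "Group 1", "measure": "Critical results called within 60 min (%)", "value": 97.5},
        {"group": "Group 2", "measure": "Critical results called within 60 min (%)", "value": 94.0},
    ]

    by_measure = defaultdict(list)
    for row in submissions:
        by_measure[row["measure"]].append((row["group"], row["value"]))

    for measure, results in by_measure.items():
        mean = sum(v for _, v in results) / len(results)
        print(f"{measure}: mean {mean:.1f} across {len(results)} groups")
        for group, value in sorted(results):
            print(f"    {group}: {value}")

Such a roll-up is only as useful as the consistency of the definitions behind each measure, which is one reason agreement on a common roster matters to an alliance of independent groups.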
“Without looking at hard data, it is impossible for large clinical [alliances] to prove value and break the RVU mold,” Miles says.
Thoughtful and methodical
No matter what data are collected and how they are accessed, thoughtful, methodical use and “marketing” of data are imperative if metrics are to be properly applied to achieving and demonstrating quality in radiology practice. For example, according to Kruskal, collecting dose data that fall within a benchmark is all very well, but only if the benchmark is appropriate.
Similarly, stating that peer review data fall within the benchmark is “appalling” in his estimation. “[It] means very little given that the data represents under-reporting,” he says. “Errors and their impact are under-reported, and since most radiologist data falls within these numbers, it appears that everybody is doing a fine diagnostic job. Current peer review data shows diagnostic discrepancy rates of 2% to 3%, and everybody falls within this number due in part to under-reporting. Evidence-based manuscripts suggest that discrepancy rates should be approximately 30%.”
What is driving a lot of the discussion now, Kruskal continues, is the implementation of merit-based incentive payment systems wherein radiologists will ultimately be measured based on the quality of services they provide, use of resources, clinical practice-improvement opportunities and contributions to meaningful use of health information technology. “For quality, while we might currently use PQRS [Physician Quality Reporting System] and other qualified clinical data registries, participation in registries doesn’t necessarily show value,” he emphasizes. “Managing data in that registry might add value, or showing how one makes efforts at improving quality to fall within a benchmark would also meet this goal.”
Preferred Radiology Alliance CEO Jeff Younger, MHA, FACHE, FRBMA, adds that rendering data useful also necessitates that detailed information be shared without reservation among constituents of entities like the Preferred Radiology Alliance, as well as between radiology departments in hospital networks. Often, he says, those who “own” data are reluctant to reveal them to others unless they have been aggregated or truly generalized.
Aggregated, generalized data are of little use in identifying areas for improvement, effecting those improvements and meeting benchmarks, Younger has discovered. “If any of this is to work, there has to be a ‘Here’s my data; let’s see yours’ type of approach,” he says.
Imperative, too, is the empowerment of frontline employees to harness data in fostering quality patient care and care enhancement. Nagy advocates that radiology practices and hospital imaging departments provide the following key resources to achieve this end.
Senior leadership support. This support is needed when front-line staff members cross organizational boundaries and/or are fighting political battles within the organization, both of which can impede the flow of critical information.
Six Sigma training for process improvement. The training includes instruction in how to become data-driven within the context of a diverse group and a particular problem. Nagy illustrates this principle with the example of an orthopedic surgeon who is frustrated by the amount of time that elapses from the moment a patient arrives at the hospital with a fracture until the individual is sent to his department.
“Without Six Sigma training, many front-line employees would not be comfortable” mapping out the process and undertaking statistical analysis to identify areas of system breakdown and make the most impactful improvements with the least amount of effort, Nagy claims.
Access to analytics, and IT support in applying them. With this support in place, front-line personnel better understand how to utilize analytics in their work rather than basing decisions on perceptions, emotions, and anecdotal outliers, Nagy notes. These factors interfere with proper benchmarking and distort decision-making, which in turn impacts quality, he observes, but analytics dispel the myths.
Community organizer. This individual promotes the engagement of clinical healthcare staff in process improvement. Data must be one of the linchpins of this effort.
Despite all of the work that has been done on the quality metrics and data fronts, sources agree that there is more to accomplish. As Kruskal puts it, “A lot of thought has been devoted to defining how radiology can use value-based metrics to benchmark performance and prove value, but it’s imperative to go further to measure and reflect the impact of imaging services on care delivery.
“Combining process and outcomes metrics is important,” Kruskal concludes. “They must act as targets for continuing to improve service quality—and value-based metrics are needed to assess and prove our contribution to overall patient care.”
The Value Metric Matrix
Several categories and sub-categories of value-based metrics are emerging, according to Jonathan B. Kruskal, MD, PhD, chairman of the Department of Radiology at Beth Israel Deaconess Medical Center in Boston and professor of Radiology, Harvard Medical School. Kruskal and co-author Ammar Sarwar, MD, outlined the basics in a recent article1 in the Journal of the American College of Radiology.
Health-status metrics
These metrics indicate a patient’s survival or degree of health. Radiologists might measure survival metrics as mortality (e.g., procedure-related mortality), the interval from symptom onset to diagnosis of a life-threatening condition, or how timely communication of an accurate report impacted care in the trauma setting. The degree-of-health sub-category might incorporate the extent of pain relief following an image-guided analgesic procedure or the ability to resume regular activities after an imaging procedure has been performed.
Process-of-recovery metrics
Process-of-recovery metrics illustrate treatment dysfunction or time to recovery. Readmission rates, additional management requirements (e.g., medications, blood products and nurse visits) and/or repeat imaging rates facilitate the measurement of treatment dysfunction, Kruskal explains, with procedure complications, delays and adverse events falling into this category. Under the time-to-recovery umbrella come access to imaging services, report turnaround time (along with the myriad complex metrics that go into it) and time for effective communication of crucial results.
Length of stay (LOS) is a time-to-recovery metric, Kruskal observes, as are cumulative costs of disposables, cost savings per episode of care, evidence-based blood product utilization and timing/appropriate use of prophylactic antibiotics. Devising a radiology-specific LOS measure, he states, would help to highlight radiologists’ role in improving the time of inpatient recovery (i.e., the frequency with which a radiologic study or procedure contributed to LOS reduction).
Health-sustainability metrics
Metrics in this category center on the nature of recurrences and any long-term consequences of care, among them consequences stemming from errors of omission or commission. Kruskal cites helping to identify early recurrences (e.g., via oncologic imaging) and assisting in population health maintenance (e.g., through screening mammography) as examples of how radiologists assist in sustaining health between acute episodes of care.
Metrics for image-guided procedures
Two sub-categories of these metrics encompass process metrics (e.g., for a liver biopsy, the number of passes completed, the duration of time the procedure room was used, needle size, and compliance with the universal protocol) and outcomes metrics (like complication rate, percentage of diagnostic samples and pain relief). Cost-related metrics—among them impact on LOS and treatment selection and expenditures sparked by complications—also may come into play here.
Reference: Kruskal JB, Sarwar A. An introduction to basic quality metrics for practicing radiologists. J Am Coll Radiol. 2015;12(4):330-332.
Julie Ritzer Ross is a contributing writer for Radiology Business Journal.