Measuring Up: Integrating Performance Benchmarks Into the Practice

Within the past 10 years, physician-performance benchmarking has progressed from a contentious topic to one around which entire meetings in the radiology community are organized. Some industry analysts attribute this evolution to necessity: given the many challenges (including financial difficulties) facing health care today, measuring physician performance has become unavoidable. Reductions in reimbursement mean that practices need to find ways to be more productive, according to David Haws, CFO, Pueblo Radiology Medical Group, Santa Barbara, California. “Productivity benchmarking is one of several tools you have to look at to determine if your practice is becoming more efficient, standing still, or moving backward,” he says.

When Pueblo Radiology Medical Group first implemented its benchmarking program three years ago, it experienced an immediate 10% increase in physician productivity, Haws reports. That number has since leveled off, but the program still serves as a tool to assess the skills and capabilities of each radiologist. Haws says, “I don’t know how a group can survive without understanding its level of productivity. It has to be measured to ensure that each physician is benefiting the group.” Even though leaders at Pueblo Radiology Medical Group don’t stress clinical productivity as what Haws calls the end-all, be-all element of their work, he says that it is important that the radiologists know how they compare with their peers, how they can improve, and where they are succeeding (compared with other group members).

The medical profession is enduring considerable change, and as government becomes more involved in health care, advancing and adapting are critical to success. “A progressive program is about being willing to change and to find better ways to do what you’ve done in the past. It’s the only way to get ahead and stay ahead,” Haws says. Benchmarking is one of Pueblo Radiology Medical Group’s methods for increasing efficiency and improving performance; in addition, the practice uses a fully integrated PACS, report templates, voice recognition, and peer-review participation.

Lawrence Harter, MD, FACR, practice president, notes that the group’s quarterly and annual benchmarking results, which measure only work RVUs (relative value units), allow partners to make educated decisions about internal and external staffing. The quarterly performance reports also measure the productivity of individual radiologists; this not only helps them align themselves better with their peers, but also allows the group to assess its performance relative to both national and regional numbers. Together, the measurements allow the board to assess the group’s overall performance, including which improvements need to be made (and by whom). Haws says, “Benchmarking is about self-motivation and self-development. It is a way for physicians not only to set goals, but to reach them.” The numbers also provide important information that can be leveraged for marketing purposes.

Pueblo Radiology Medical Group’s benchmarking reports don’t account for time spent building the practice, Harter says. Everyone in the group, however, is aware of who is performing that activity, and the time that it requires is taken into account if that individual’s numbers fall below the productivity threshold. Harter adds, though, “Most of our practice builders tend not to have productivity issues.”
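The article does not give Pueblo Radiology Medical Group’s formulas, but the core comparison is straightforward. Below is a minimal Python sketch, assuming a simple per-clinical-day calculation; every name, RVU total, and the benchmark figure is hypothetical.

```python
# Illustrative per-clinical-day work-RVU comparison. All names, totals,
# and the benchmark value are hypothetical; a real program would pull
# these figures from the group's billing system.
quarterly_wrvus = {"Radiologist A": 2450.0, "Radiologist B": 2810.5, "Radiologist C": 1975.2}
clinical_days = {"Radiologist A": 58, "Radiologist B": 61, "Radiologist C": 55}
GROUP_BENCHMARK = 40.0  # hypothetical work RVUs per clinical day

for name, total in quarterly_wrvus.items():
    per_day = total / clinical_days[name]
    status = "at or above benchmark" if per_day >= GROUP_BENCHMARK else "below benchmark"
    print(f"{name}: {per_day:.1f} wRVU per clinical day ({status})")
```

Normalizing by clinical workdays rather than calendar time matters because, as Harter notes later in this article, time is allotted for administrative work; without that adjustment, a radiologist with heavy committee duties would look unproductive.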
To ensure that quality isn’t suffering in the pursuit of productivity, a formal peer-review process occurs annually for everyone except partners (who undergo peer review every other year), Harter says. In addition, a quality committee was created to oversee a review process for sentinel events. These reviews involve either follow-up action by the quality committee or an individual meeting between a senior physician and the physician involved in the incident.

Unintended Consequences

In academic settings, performance benchmarking provides some of the same advantages seen in group practice, but it also provides an avenue for radiologists to measure overall contributions, including academic endeavors. In addition, it helps physicians understand the different components involved in the team environment, Paul Nagy, PhD, explains. Nagy is director of quality and informatics research and associate professor of radiology at the University of Maryland School of Medicine in Baltimore. He also is the coauthor of an article¹ that details the importance of using nonclinical RVUs in academic settings.

Radiology leaders at the University of Maryland created a method for measuring academic RVUs so that radiologists are given credit for time spent on teaching and research. They also created a Web-based system that automatically calculates work RVUs, measuring radiologists’ clinical productivity by comparing the numbers, by subspecialty, with benchmarks established at other academic institutions. The results are used in performance appraisals, and radiologists have access to the data throughout the year. The department’s IT warehouse system collects productivity and performance data (updated daily) from the RIS. The system has the added benefit of ensuring that each faculty member’s current CV is available to decision makers in the department through CV Manager, which allows online editing and can collate benchmarking numbers based on specific requests. Nagy built the system with an Apache Web server used in conjunction with a MySQL® database; the PHP programming language generates the Web pages.
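The article names the stack (Apache, MySQL, PHP) but not the schema, so any concrete example is guesswork. The Python sketch below uses sqlite3 as a stand-in database to illustrate the kind of roll-up such a warehouse performs: summing each radiologist’s work RVUs by subspecialty and setting the totals beside an external academic benchmark. The tables, exam data, and benchmark values are all invented.

```python
# Hypothetical illustration of the warehouse roll-up described above.
# sqlite3 stands in for the MySQL database; schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exams (radiologist TEXT, subspecialty TEXT, work_rvu REAL);
CREATE TABLE benchmarks (subspecialty TEXT, annual_wrvu REAL);
INSERT INTO exams VALUES
  ('Dr. A', 'neuroradiology', 1.5), ('Dr. A', 'neuroradiology', 2.0),
  ('Dr. B', 'body imaging', 1.2),  ('Dr. B', 'body imaging', 1.8);
INSERT INTO benchmarks VALUES ('neuroradiology', 6500), ('body imaging', 6000);
""")

# Sum each radiologist's work RVUs by subspecialty, joined to the benchmark.
rows = conn.execute("""
SELECT e.radiologist, e.subspecialty, SUM(e.work_rvu) AS total_wrvu, b.annual_wrvu
FROM exams e JOIN benchmarks b ON b.subspecialty = e.subspecialty
GROUP BY e.radiologist, e.subspecialty
""")
for radiologist, subspecialty, total, benchmark in rows:
    print(f"{radiologist} ({subspecialty}): {total:.1f} wRVU vs benchmark {benchmark}")
```

In a production version, the exam table would be populated by the daily RIS feed that the article describes, rather than by hand.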
In addition to work RVUs, the department uses academic RVUs to measure such contributions as writing articles for publication, teaching, and grant-related activities. The department takes the measurement a step further by weighting the importance of each contribution, including whether an article is published in a journal having high or low impact. The Web-based system, Nagy says, “has helped us become a lot more relevant in our decision making.”

There is little dispute that performance benchmarks serve a valuable purpose, or that, used properly, they can benefit radiologists in both private and academic practices. Many industry experts appear to agree, however, that some danger lies in creating a culture that focuses strictly on clinical productivity. Richard Duszak, MD, FACR, FSIR, a diagnostic and interventional radiologist practicing at Mid-South Imaging & Therapeutics (Memphis, Tennessee), is the lead author of a two-part series²,³ on benchmarking models. “Radiologists are like any other human beings: they will modify their behavior based on metrics,” he says. “If all a group is measuring is quantity, then that’s what it will get, at the cost of relationships and patient satisfaction.”

Duszak and his coauthor, Muroff, use the term unintended consequences for the detrimental effects that can accompany a flawed performance-benchmarking program. When a practice’s success is based on volume, some radiologists become unwilling to respond to questions from patients or hospital administrators because they are trying to reach their productivity numbers. Duszak says, “When you stop providing those kinds of services, then the practice’s contract with the hospital is at stake.” The kind of behavior that a group or academic center wants to reinforce is something to consider when developing a physician-productivity model, Duszak notes.

One of the biggest challenges is performance evaluation based on fee-for-service reimbursement. Under such a system, a physician will sometimes do what earns payment, even though it might not be in the best interests of the patient, Duszak says. For example, without stopping to question the patient’s referring physician, a radiologist might repeat a test that the same patient underwent a week earlier. Another byproduct of reinforcing clinical productivity is cherry-picking: radiologists can inflate their work RVUs by choosing only cases that are quickly read, Duszak adds.

Harter agrees that such a practice isn’t in the best interests of the organization, and that everyone should do his or her fair share of the work. This is why Pueblo Radiology Medical Group has preventive measures in place to ensure the equality of workloads. In the interest of maintaining fair numbers, RVUs are measured by clinical workdays, with time allotted for administrative work. Harter acknowledges, however, that there is no way to measure productivity for time that radiologists within the group spend in consultation with one another.

Neglecting to measure the nonclinical contributions of radiologists sends a dangerous message, Nagy notes. “If you don’t measure it, you don’t value it: that is the message being sent to employees,” he says. That culture is especially detrimental to academic radiology departments, where research and teaching can be as critical to success as clinical productivity is. In addition, clinical-productivity numbers don’t always reflect a radiologist’s true working day because many radiologists spend time training and working with residents. “Radiologists want to expect certain behaviors and motivate those behaviors, and want to improve the bottom line, but it’s a careful mix,” Nagy says. He has developed a prototype radiologist report card (Figure 1) that takes a holistic approach to measuring the academic radiologist’s performance. Drawing values from disparate systems, the reports can be built manually using Microsoft® Excel®. “It would not be difficult to build this into an integrated report-generating system, but we don’t do it right now,” he says.
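The department’s actual weights and the report card’s exact formulas are not published in the article, so the following Python sketch only illustrates the weighting idea Nagy describes: each nonclinical contribution earns credit scaled by its importance, with a high-impact publication worth more than a low-impact one. All categories and weight values here are hypothetical.

```python
# Hypothetical academic-RVU weighting in the spirit Nagy describes.
# Every category and weight is invented for illustration only.
ACADEMIC_WEIGHTS = {
    "paper_high_impact": 30.0,   # article in a high-impact journal
    "paper_low_impact": 10.0,    # article in a low-impact journal
    "lecture_hour": 2.0,         # resident/fellow teaching
    "grant_submission": 25.0,    # grant-related activity
}

def academic_rvus(contributions):
    """Sum weighted credit for a faculty member's nonclinical work."""
    return sum(ACADEMIC_WEIGHTS[kind] * count for kind, count in contributions.items())

# One row of a report-card-style summary for a hypothetical faculty member.
faculty = {"paper_high_impact": 2, "lecture_hour": 40, "grant_submission": 1}
print(f"Academic RVUs: {academic_rvus(faculty):.1f}")  # 2*30 + 40*2 + 1*25 = 165.0
```

A report card like the one in Figure 1 would set a score of this kind alongside clinical work RVUs, so that neither number is read in isolation.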
Strategic Measurement

Creating a successful, comprehensive performance-benchmarking program isn’t impossible, but it takes forethought, time, and education. Productivity is one aspect that should be measured, but its measurement isn’t the only tool to use when assessing performance. Duszak says, “The first thing you should do is step back and ask, ‘Why do we want to do this?’” By asking that question, a group might find that performance measurement isn’t the answer, and that the issue in question is, for example, a personnel problem, he adds. It’s also important to look at the areas that are most vital to the success of the group or department, such as quality, patient safety, and relations with hospital administration. “Deciding what to measure ultimately should be based on what is important to the practice,” Duszak says. In Duszak and Muroff’s article³ on moving beyond the numbers, five questions are recommended as starting points for practices and centers that want to implement a physician-performance model: Should we measure it? What should we measure? How should we measure it? How should we manage it? How should we modify it?

Once the decision has been made to create a program, many supporting resources are available. Groups with existing programs can use the same resources to discover better ways to use their reports. Pueblo Radiology Medical Group’s leaders attended practice-leader meetings and read articles to educate themselves about best practices for implementing a physician-performance model, Harter says. The ACR® also served as a valuable resource when the group was implementing its benchmarking program. Other steps often recommended are exploring the resources of the Association of Administrators in Academic Radiology, speaking with similar groups that have successfully implemented benchmarking programs, and conferring with one’s billing company or department about the kinds of reports that can be generated.

Harter warns other organizations that Pueblo Radiology Medical Group initially drew its data from more than one source and experienced inconsistencies as a result. He adds that the group’s benchmarking program is a work in progress. For example, performance reports did not include physicians’ names when they were initially published for group review, but names were later included as an experiment. The reports have since reverted to being anonymous, at the request of some radiologists who were uncomfortable with the use of names. Such tweaking is necessary to ensure that a group’s benchmarking program is as effective as possible.

Beyond Productivity

Regulators and payors have created additional impetus for performance measurements that go beyond quantity to include qualitative measures. Developing databases to help practices meet these requirements is an ongoing ACR initiative. The college has developed relevant databases that are available to members: the National Mammography Database, the CT Colonography Registry, and the General Radiology Improvement Database, all of which fall (with other databases) under the umbrella of the National Radiology Data Registry. In addition to the multiple databases available to members, the ACR has created educational sessions for those who want to learn about the intricacies of benchmarking. Judy Burleson, director of metrics in the ACR’s Quality and Safety Department, says that benchmarking has only started gaining momentum in the past 10 years; it wasn’t until around 2006 that the ACR began featuring educational sessions specific to benchmarking performance. “This year, we have really picked up steam to be able to provide resources to the radiology community,” she says.

The ACR has made a point of providing resources at the regional, state, and national levels, and these resources include its RADPEER™ peer-review system. The change in educational focus is a response to radiology groups’ and academic centers’ recognition that increasingly tight budgets make efficiency critical to their survival. In addition, benchmarking reports allow physicians to provide accurate details for both evaluations and reports required by regulatory and accrediting agencies. The Physician Quality Reporting Initiative (PQRI) created by CMS doesn’t yet require physicians to provide reports, but there is an incentive payment of up to 2% of total estimated Medicare Part B fees for diagnostic radiologists who report on three of the four relevant measures designated for 2010. Several more measures pertain to interventional radiology. By 2015, radiologists will be among those required either to submit such reports or to accept penalties (which will gradually increase to 2% by 2016).
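The dollar stakes are easy to make concrete. A back-of-the-envelope Python calculation, using a hypothetical annual charge total with the percentages cited above:

```python
# Illustrative PQRI arithmetic. The charge total is hypothetical; the 2%
# bonus (2010) and the penalty phased in by 2016 come from the article.
part_b_allowed_charges = 400_000.00  # hypothetical annual Medicare Part B fees

bonus_2010 = 0.02 * part_b_allowed_charges    # up to 2% for reporting
penalty_2016 = 0.02 * part_b_allowed_charges  # up to 2% for not reporting

print(f"2010 incentive (reporting): up to ${bonus_2010:,.2f}")
print(f"2016 penalty (not reporting): up to ${penalty_2016:,.2f}")
```

For this hypothetical practice, the swing between reporting and not reporting is roughly $16,000 per year once the penalty is fully phased in.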
The Joint Commission requires physician evaluations as part of its accreditation program. “Credentialing/privileging is a very important part of the accreditation process, and its Focused Professional Practice Evaluation and Ongoing Professional Practice Evaluation are both a part of that,” Robert Wise, MD, vice president of the Joint Commission’s Division of Standards and Survey Methods, explains. He says that benchmarking can be a valuable tool for radiologists because the numbers can indicate potential problems of which the physicians might not be aware. “It doesn’t mean that if you hit a certain number, you are doing anything wrong, or that you are a bad physician; it just means that there could be room for improvement,” he says. For example, monitoring how long it takes different radiologists to perform fluoroscopy can help determine whether someone is unintentionally overexposing patients to radiation, Wise says. Reviewing numbers and protocols is a good way both to improve performance and to ensure patient safety.
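Wise does not describe a specific algorithm; the screening he outlines can be as simple as flagging statistical outliers in average fluoroscopy time as prompts for protocol review. A hypothetical Python sketch (the names, minutes, and one-standard-deviation threshold are all invented, and a flag is a prompt for review, not a verdict):

```python
# Hypothetical outlier screen for average fluoroscopy time per case.
# Names, minutes, and the threshold are invented for illustration.
from statistics import mean, stdev

avg_fluoro_minutes = {"Dr. A": 3.1, "Dr. B": 2.8, "Dr. C": 3.4, "Dr. D": 6.9}

values = list(avg_fluoro_minutes.values())
mu, sigma = mean(values), stdev(values)

for name, minutes in avg_fluoro_minutes.items():
    if minutes > mu + sigma:  # arbitrary screening threshold
        print(f"{name}: {minutes} min average; review technique and protocol")
```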
Radiologists also are under scrutiny in light of reports that have appeared in the popular press of patients being overexposed to radiation. The result has been an FDA initiative to reduce patient exposure; the Joint Commission is working with radiology leaders to create screening criteria to ensure patient safety, Wise says.

A Cultural Sea Change

Improvements in efficiency and patient safety are positive outcomes of implementing a physician-performance model. One of the remaining challenges, however, is overcoming the resistance of radiologists who don’t like the idea of being monitored. Radiologists are smart, intellectually curious people; telling them to do something isn’t a good approach because they need to buy into the program, Duszak says. They are naturally competitive and want to be the best at their jobs, in addition to earning the respect of their peers. A program will be more successful if physicians accept it, instead of merely being told to participate. “In the interest of maintaining quality, it has to be a grassroots effort,” Duszak says. He recommends educating radiologists about the changes that will take place when a benchmarking program is implemented. “They need to understand why the group is benchmarking and how it will be done,” he says. By educating radiologists gradually, practices can encourage them to be involved from the program’s inception and can make them more likely to embrace the change.

If radiologists are made aware of the degree to which societal pressure and government involvement are forcing radiology groups and academic centers to be more efficient (and more accountable for their protocols), they will have a better understanding of how important benchmarking is to their organizations. Explaining RVU measurements is another important aspect of this education. “The groups that are successful are the ones that have taken the time to educate their physicians,” Duszak says. In addition, Burleson recommends making sure that leaders are heavily involved in the decision-making process and that one person is chosen to spearhead the project. “Start small and look at what the group wants to do,” she says.

Casting a new benchmarking program in a positive light, by using it as a method of helping radiologists succeed rather than as a monitoring device or a way to punish people, is another way to ensure more willing participation, according to some analysts. It is important, however, for leaders to implement programs without disrupting cohesive groups. “Ultimately, benchmarking is part of the job description, and it contributes to an employee’s performance,” Duszak notes. Physicians need to be told what is expected of them and need to be held accountable for their performance. Duszak shares a graph (Figure 2) representing the results of a minimal-expectation system implemented by one practice; it shows that radiologists at lower performance levels increased their productivity to meet the expectations of the practice.

“We need to get away from doing things to patients and start doing things for patients,” Duszak says. “I tell physicians that we need to do this now, or someone else will do it for us.”

Erin Burke is a contributing writer for Radiology Business Journal.
