Productivity Tracking for Radiologists

Because there are not enough radiologists to meet demand, practices must learn to make the most of their radiologists’ time, according to a paper presented in Chicago at RSNA 2008: Personal Learning in the Global Community. On December 3, Tracking Physician Productivity: Is It Necessary and How Should It Be Done? was presented by Fred Gaschen, MBA, CHE, executive vice president, Radiological Associates of Sacramento, Calif; Stephen Chan, MD, assistant professor of radiology at Columbia University, New York; and Richard Duszak, Jr, MD, a diagnostic and interventional radiologist with Mid-South Imaging, Memphis, Tenn.

The need for efficiency in radiology, they note, is only going to grow, primarily because the need for radiologists will continue to increase. Population growth, the ongoing introduction of new imaging technologies (with broader applications for established technologies), and the aging of the population are among the factors indicating that demand for radiologists will continue to outstrip supply for the foreseeable future.

Reasons to Measure

The process of measuring productivity is typically undertaken, the presenters say, because radiology practices need some way to compare their output, either as an average (compared with other practices) or among the radiologists within the practice. Some practices may need comparative productivity figures to justify recruiting more radiologists to handle the existing or projected workload; for example, showing that the practice’s radiologists are already more productive than expected strengthens the case for adding radiologists rather than increasing each radiologist’s current workload.
Some groups might simply need a way to track their productivity improvements over time, partly to reassure the radiologists that they are performing at a high level, but also to monitor the effects, over time, of changes in operational methods or information technologies that can enhance or reduce radiologists’ productivity. Other practices may want access to individual productivity figures so that they can push underproducing partners to put in more hours, boost their efficiency, or otherwise increase their output.

Of course, measuring and comparing productivity are not simple tasks. Radiologists work under circumstances that can vary widely by setting, practice type, subspecialty, patient population, information and imaging technologies in use, customer expectations, and even regional lifestyles. Even where identical circumstances exist for comparison, productivity changes over time, so comparisons may no longer be valid once a few years have passed. Over the past several years, for example, many radiology practices and individual radiologists have responded to financial pressures by improving their productivity.

Quantifying Work

The presenters identify four primary measurement sets that can serve to quantify work as a starting point in productivity assessment. These sets cover procedures, revenue, time, and RVUs, and practices have used them singly or in various combinations.

Measuring procedures is direct and fairly straightforward, and it correlates well with services actually provided; CPT® codes are relatively easy to use for this purpose. There are a few drawbacks, however, to using this method alone. It presumes that all services are equal in complexity, thereby giving radiologists an incentive to boost their productivity ratings by concentrating on low-complexity procedures such as reading chest radiographs. Clearly, this work is not the equivalent of stereotactic breast biopsy or interventional radiology, the presenters note.
The most easily measured indicator of productivity is revenue generated. Dollars show a radiologist’s financial effect on the practice directly, but the difference between gross charges and net revenues must be evaluated carefully to avoid mistaken impressions. Performing studies for patients with better insurance coverage may produce more dollars without representing higher actual productivity, for example. Relying on revenue measurement alone can also create an incentive to focus on expensive procedures; the presenters say that radiologists in a practice that tracks only dollars could leave for last the same stacks of chest radiographs that would be completed first in a practice that measures only total procedures.

Time is readily measured as hours worked, but the amount of time spent at a job is not always (or even often) the amount of time spent working. Productive and nonproductive time cannot be separated simply by counting total hours worked, the presenters say, so a time measure alone may bear little relation to how much a practice or individual radiologist actually accomplishes.

While RVUs are more difficult to understand and measure, the presenters point out that this method can produce a realistic indication of the value of physician services, provided the components of each service have been measured correctly. Of course, the accuracy of the RBRVS in assigning value to the resources expended in providing care can be questioned, and some modifications may be needed to compensate for individual circumstances. Professional RVUs consist of practice-expense, malpractice, and work components. The work RVU indicates what the physician contributes to the patient’s care¹ and takes into consideration the time required to complete a given procedure, the technical skills required, and the amount of effort (both physical and mental) that must be expended. The presenters call the work RVU the best available measure of clinical work.
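The contrast between raw procedure counts and work-RVU-weighted output can be illustrated with a small sketch. The procedure labels and work-RVU values below are invented for illustration only; they are not actual CPT codes or RBRVS figures.

```python
# Compare raw procedure counts with work-RVU-weighted totals for two
# hypothetical radiologists. All RVU values here are illustrative.
from collections import Counter

# Hypothetical work RVUs per procedure type (not real RBRVS values)
WORK_RVU = {
    "chest_xray": 0.22,          # low-complexity read
    "ct_abdomen": 1.19,
    "stereotactic_biopsy": 3.00,
}

def productivity(procedures):
    """Return (raw count, work-RVU-weighted total) for a list of procedures."""
    counts = Counter(procedures)
    raw = sum(counts.values())
    weighted = sum(WORK_RVU[p] * n for p, n in counts.items())
    return raw, round(weighted, 2)

# Radiologist A stacks low-complexity chest radiographs; B does mixed work.
a = ["chest_xray"] * 80 + ["ct_abdomen"] * 5
b = ["chest_xray"] * 20 + ["ct_abdomen"] * 25 + ["stereotactic_biopsy"] * 6

print(productivity(a))  # (85, 23.55) — more procedures, fewer work RVUs
print(productivity(b))  # (51, 52.15) — fewer procedures, more work RVUs
```

Counting procedures alone ranks A well ahead of B, while the work-RVU weighting reverses the ranking, which is exactly the incentive problem the presenters describe.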
Once RVU information has been compiled, it can be combined with other measurements to improve the accuracy of productivity assessment. In cases where the work RVU is considered to reflect the time component imperfectly, for example, a separate multiplier can be assigned, according to procedure type, to make the productivity evaluation more realistic.² Similarly, a practice might use a combination of gross charges and RVUs to create a productivity measure better suited to its own circumstances.

Using Benchmarks

Unless the practice is interested only in tracking changes in its productivity over time, it may want to find benchmarks that it can use to compare its own productivity with that of similar practices. With appropriate external benchmarks, it can use its internal productivity data to determine not only whether it is doing better or worse than it was earlier, but also whether it is doing well or poorly for a practice of its type. Unfortunately, reliable external benchmarks that apply fully to a practice’s own circumstances are likely to be difficult to find.

Published benchmarks are available from several sources, including academic practices, multispecialty clinics, mixed practices, and the Medical Group Management Association. One set of benchmarks³ derived from 312 radiologists at 21 multispecialty clinics showed an average annual productivity of 11,559 exams and 6,090 work RVUs per radiologist. Another study⁴ of 743 radiologists at 20 academic centers indicated per-radiologist averages of 7,156 exams and 4,458 work RVUs per year. An ACR survey⁵ of 411 practices yielded an annual average of 12,800 exams per radiologist. The same investigators⁶ found that academic centers had the lowest productivity per FTE radiologist per year, with a mean of 9,900 exams and a median of 9,000 exams. Private radiology groups showed the highest productivity, with a mean of 15,200 and a median of 14,900 exams.
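A practice comparing itself against published figures like these might compute its per-radiologist volume as a percentage of each benchmark. The benchmark numbers below are the exam figures cited above; the practice’s own figure is hypothetical.

```python
# Sketch: express a practice's annual exams per radiologist as a
# percentage of each published benchmark cited in the text.
BENCHMARKS = {
    "multispecialty clinics": 11559,
    "academic centers": 7156,
    "ACR survey, all practices": 12800,
    "private groups (mean)": 15200,
}

def compare(practice_exams_per_rad):
    """Return practice volume as a percentage of each benchmark."""
    return {name: round(100 * practice_exams_per_rad / value, 1)
            for name, value in BENCHMARKS.items()}

# Hypothetical practice averaging 13,000 exams per radiologist per year
for name, pct in compare(13000).items():
    print(f"{name}: {pct}% of benchmark")
```

As the presenters caution, such percentages only mean something if the comparison practices genuinely resemble one's own, and the benchmarks themselves age quickly.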
When results for practices of all kinds were combined, the mean was 13,900 and the median was 13,400 exams. The range of variation in productivity among practices was large, with 50% more procedures performed at the 75th percentile than at the 25th, and the presenters say that benchmark figures are quickly becoming outdated because of industry-wide productivity increases. Between 1992 and 2003, for example, overall productivity among radiologists increased about 22%.⁶

Improving Efficiency

Flawless productivity comparisons with external benchmarks may not be achievable, but there are still valid reasons to track radiologists’ productivity, according to Gaschen, Chan, and Duszak. Internal productivity tracking is a useful management tool; although most practices probably know whether there is a radiologist who is not doing a fair share of the work, internal comparisons that quantify this problem can be a step toward addressing it. Internal productivity records will also permit the detection of trends and provide data supporting the need for more staff. Where PACS is in place, productivity tracking may show where and how the system should be used to redistribute radiologists’ workloads more effectively. Over the long term, practices may also see a Hawthorne effect, in which productivity improves simply because it is being measured.

Internal productivity information should be normalized before it is shared, the presenters say. The RVUs or procedures per day should be adjusted to remove radiologists’ administrative or practice-building hours; past RVU values should be converted to those now in use; and days should be defined according to whether the assessment covers a normal 9-hour weekday, weekend work, or a 12-hour night-coverage shift. The presenters also caution practices to determine what they want to achieve before they share productivity data with radiologists.
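The normalization adjustments just described — stripping out administrative hours and accounting for shift length — can be sketched as a per-clinical-hour rate. The shift definitions follow the 9-hour weekday and 12-hour night shift mentioned in the text; the function name and the sample figures are hypothetical.

```python
# Minimal sketch of normalizing RVU output to clinical hours actually
# worked, excluding administrative or practice-building time.
SHIFT_HOURS = {"weekday": 9, "night": 12}  # weekend shifts would need their own definition

def rvus_per_clinical_hour(total_rvus, shift_type, admin_hours):
    """Normalize a shift's RVU output to its clinical (non-administrative) hours."""
    clinical_hours = SHIFT_HOURS[shift_type] - admin_hours
    if clinical_hours <= 0:
        raise ValueError("no clinical hours left after administrative time")
    return round(total_rvus / clinical_hours, 2)

# A radiologist producing 54 work RVUs on a 9-hour weekday with 2 admin hours
print(rvus_per_clinical_hour(54, "weekday", 2))  # 7.71
```

Without this adjustment, a radiologist with heavy administrative duties would look less productive than an identical colleague with none, which is precisely the distortion the presenters warn against.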
Information meant to create an incentive for higher productivity will be handled differently from data intended to punish low productivity. Data-sharing methods may range from monthly open meetings to group or individual letters and email messages to blinded individual reports. Careful thought should also be given to the effects that productivity-boosting approaches can have on the practice, especially where financial rewards are to be applied. Berlin⁷ has described a liability case (involving a seven-figure settlement) in which the defending radiologist was considered to have missed a case of breast cancer because he had read 162 studies that day. This radiologist received an annual bonus based on the number of studies that he interpreted.

Care is clearly needed in creating penalties and incentives, but practices can still benefit from productivity tracking. Because every practice is unique, the presenters conclude, internally developed benchmarks are likely to be more helpful, overall, than external benchmarks that cannot keep pace with the changing field of radiology.
Kris Kyes, Contributor
