Effective Quality Assurance: Obstacles and Pointers
As the radiology marketplace matures, becoming increasingly competitive, it’s more important than ever for practices to differentiate themselves based on quality, according to Peter Franklin, MD, chair of radiology for Radisphere National Radiology Group, Cleveland, Ohio. “The bar has been significantly raised,” Franklin observes. “The referring physicians know there’s the potential for excellence out there, and they’re not only demanding it—they want proof that the radiology services they’re ordering are of the highest possible quality.”

Franklin sees two primary forces at work in the marketplace that have created this demand for consistent, proven quality in radiology. The first is the proliferation of imaging in medicine, which he sees as a double-edged sword: as the specialty spreads, nonradiologist clinicians are both more reliant on and more familiar with good imaging.
“In the old days, the clinicians—the neurosurgeons, the orthopedic surgeons, the cardiologists—weren’t very sophisticated about imaging,” Franklin says. “Imaging was the domain of the radiology departments in hospitals, but the next generation of residents grew up with imaging. Orthopedists, for example, became accustomed to reviewing MRIs on all their patients. They certainly depended on radiologists for advice and consultation, but they became quite knowledgeable about what they were looking at.”
The other driving force identified by Franklin is the payor, since payors are increasingly looking at linking reimbursement to quality in radiology. “They might become the biggest factor,” he observes. “They’ve learned that a misinterpreted examination can have the potential for tremendous downstream costs. At the end of the day, these are the folks paying for the product, which is the radiology report, and they will prefer to pay more for a quality interpretation than to pay the going rate for one that is misinterpreted. This sudden rush to excellence is a defensive maneuver on the part of many radiologists, and they have good reason to react that way.”

Barriers to Quality

Establishing an effective quality-assurance (QA) and peer-review process can be tricky, however. One obstacle that any practice will encounter immediately is cost—especially in a time when radiology groups are already fighting declining reimbursement. “It’s expensive,” Franklin says. “You have to pay radiologists to review studies that have already been interpreted, blinded to the original report. Two reports need to be generated, but the insurer only pays you once.”
Another problem, especially for smaller groups, is the issue of loyalty among team members. Franklin offers the example of a community-based hospital with fewer than 10 radiologists on staff. “In a situation like that,” he says, “you don’t want to corner a fellow radiologist and tag him or her as an error maker. What happens (more often than not, unfortunately) is that there is a QA process in place, but for the most part, it’s just a formality. This is not an issue for Radisphere, as the radiologists are distributed all over the country; the pressure to hide a deficient radiologist is mitigated.”
Franklin notes, however, that these issues can and should be surmounted by radiology groups aiming to stay competitive in their markets. As for the expense of implementing a peer-review program, he suggests that practices look at it as just another cost of doing business. “If you want your business model to be sound, reproducible, and beyond reproach,” he says, “you have to pay for it. It’s no different than the other investments a practice might make to improve quality, such as upgraded computers, software, or high-resolution monitors.”
Franklin also recommends an approach to peer review that is educational rather than punitive or, worse yet, inconsequential. “We all make errors,” he says. “Radiologists shouldn’t have to feel defensive about that. Instead, they should strive to improve at what they do. Sooner or later, we all get dinged, and the best approach is to learn from that and improve. That’s the difference between peer review as a formality and a highly effective peer review program.”

QA That Works

Franklin explains that one of Radisphere’s founding tenets was a commitment to quality, which the group reinforces by sending its clients its error rate on a quarterly basis. “We don’t sweep our mistakes under the rug,” he says. “We share them with our clients, and we’re proud of it. Our peer review process is more robust than that of any of the hospital’s other departments.”
That process involves randomly assigning radiologists to double-read other radiologists’ studies, then following up in the event of a discrepancy between the two reports. If a client brings Radisphere a finding that it’s skeptical about, two radiologists are assigned to read the study and determine whether they agree that there is an error of clinical significance. A peer review committee then meets to render a final determination of what happened.
“For example, if there’s a potential error on a CT of the inner ear, two neuroradiologists will independently interpret the examination, generating a full report while being blinded to the reason for the review,” Franklin says. “You’re evaluated by your peers, as you should be, to level the playing field.” The frequency with which radiologists are evaluated depends on their case mix—for instance, those serving hospital clients are audited more aggressively because patient acuity is higher in the hospital environment.
Franklin stresses that in most cases, the response to an error is education for the radiologist, not punitive measures. “We have defined standards for what constitutes an acceptable error rate, and if we have someone who’s exceeding that and not responding to education, then we’ll take further steps, including a more stringent and focused review,” he says. “We’re very fair to the radiologists, but at the end of the day, the first responsibility has to be to the patient.”
Franklin says an investment in good quality assurance and peer review processes will deliver a big return, especially with the marketplace’s increased focus on quality. “In the near future, I think there is going to be more and more demand on the part of referring clinicians, administrators, and payors to prove how accurate one’s group is,” he says. “There are going to be companies formed just to manage this, and I think that would be a very positive development for radiology. It will identify those radiologists who should avoid reading certain types of studies and allow the specialists and the radiologists who excel at what they’re focused on to get more studies and volume in return.”
He concludes, “We welcome that with open arms. Everybody should embrace, and not fear, the process. If you’re afraid of it, maybe you need to ask why.”

Cat Vasko is editor of ImagingBiz.com and associate editor of Radiology Business Journal.