When Your Quality Is Questioned: Answers From Frank Seidelmann, DO
Recent media coverage of a radiologist’s accusations of inaccuracy against Radisphere National Radiology Group, Cleveland, Ohio, returns the question of quality to the forefront of discussion in the radiology community. ImagingBiz spoke with Frank Seidelmann, DO, chief innovation officer and clinical director of neuroradiology for Radisphere, both about the accusations and about how quality can be quantified, defended, and improved across the profession.

ImagingBiz: Accusations were leveled against Radisphere by a locum tenens radiologist you contracted with at a California hospital. What is your response?

Seidelmann: I think this is best described by the phrase, bad news travels fast; good news doesn’t travel at all. We take quality assurance (QA) and peer review very seriously; we’re one of the few organizations that has a full-time QA/peer-review staff. As radiologists, we undertake daily double-blinded reviews, which most radiology groups don’t do. It’s extremely time-consuming, but we consider it essential.
The irony is that just 60 days ago, in this publication, we discussed our rigorous QA procedures and the considerable resources it takes to provide them to clients. This peer-review process was in place at this hospital; it was very successful at identifying discrepancies that we found, on our own, through our randomized, blinded process, and it also included follow-up on discrepancies identified by referring physicians. Those results show that our initial discrepancy rates at this hospital are well below what we believe to be industry norms.
It also shows that any local radiology group is vulnerable to these types of unsubstantiated charges, so we all need to be vigilant in adhering to documented quality policies in all of our practices. At Radisphere, we’re constantly making sure our product is of the highest quality. Our radiologists are measured and tracked on quality, and we’re transparent about it with our clients; we report back to them on a monthly and quarterly basis. We’re often asked how our numbers compare with those from the prior group, and we discover that there are no numbers from the prior group.

ImagingBiz: Since these allegations were made in the press, what recourse do you have?

Seidelmann: For patient- and hospital-confidentiality reasons, this really does not merit any public response. What is clear is that no formal complaint by this radiologist was ever brought through the proper channels at the hospital prior to this article, and it is also clear that we have the full support of senior administration there. When you face this sort of public claim, you really dig in—even beyond our normal QA process—to investigate it thoroughly, and we cannot substantiate the statements that were made in any way. We can confidently say, after our thorough review, that our QA rates are outstanding and no patients have been harmed.

ImagingBiz: Not much has been written on error rates in the peer-reviewed radiology literature. How do you know with what to compare yourselves? What are the industry norms?

Seidelmann: That’s the problem: there is limited research on industry benchmarks. There are, however, many studies of interobserver discrepancy rates for specific modalities (and body parts); the most documented modality is mammography. Discrepancy rates for different modalities and exams can be anywhere from 5% to 30%—or even higher, for more complex studies.
What really has not been clearly shown (or even researched, from what I can see) is the average discrepancy rate for radiologists across all modalities and all exams, because that is the model of general-radiology practice that most community hospitals use today. There are two reasons for this: First, there is a general fear of public disclosure of discrepancies due to medicolegal concerns. Second, most studies are done at academic medical centers and do not reflect general practice across all of the modalities that a general radiologist has to read.
One thing we’ve found is that modality distributions in community hospitals are quite consistent: around 48% radiography, 20% CT, 12% ultrasound, 5% MRI, and so on. Meanwhile, extremely high discordance rates—up to 46%—have been shown between general radiologists and subspecialists in CT and MRI studies. That’s why our approach is to emphasize subspecialization through our national group, and to apply that expertise to as many exams as possible at a given client hospital.

ImagingBiz: With quality expected to become a greater factor in reimbursement, how should radiology address the quality concerns of its various constituents (patients, referrers, payors, and hospitals)? Who comes first?

Seidelmann: The question shouldn’t be who comes first, but what comes first: data. Decisions in the future of medicine will be increasingly data driven, and if your practice is not measuring a number of different quality metrics, you will be at a disadvantage, no matter what health care looks like down the road. To keep track of performance data, you need software systems that are integrated into the radiologist’s normal reading workflow.
Most peer-review systems are stand-alone applications that radiology groups ask their hospitals to buy for them and that are rarely used to their full extent. Our tools are custom engineered into our daily workflow and are not a distraction to the radiologist, so there’s much higher compliance and use—and, therefore, more accurate data about our performance. We feel every practice needs to know the answers about its own quality before anyone ever comes to question it.
Only by collecting data can you measure the performance of individual radiologists and give them continuous feedback, resulting in continuous quality improvement. We don’t use data punitively—it’s all to improve the performance of our radiologists. If we see a trend developing over time, we’re able to say that a particular radiologist does not have the best QA rate on CT of the spine, so let’s not send him or her those exams anymore. We make QA more accountable in how we manage and distribute our resources, and we can do that because we have fractional subspecialists serving many hospitals. A local radiology practice has little flexibility to change its modality assignments if a QA issue is found in one of its partners’ performance.

ImagingBiz: How is Radisphere measuring quality?

Seidelmann: Well, peer review is only one measure of quality; we treat quality as much more than a peer-review process. For example, other quality parameters we measure include compliance with critical-results reporting and the frequency of report addenda and corrections. The most important element, in this regard, is transparency in reporting these data.
We share these data with our clients and keep an open dialogue with the medical staff because, in the end, close professional relationships help the clinical-care process. Our experience is that we quickly develop these relationships with clinicians, even though our subspecialists are located all across the country. We also provide all of these data to our radiologists, in a dashboard, so they can see how they are tracking against their peers in quality performance.

ImagingBiz: How much does Radisphere devote to this effort, in terms of resources? Can a traditional radiology group afford to invest in QA systems and procedures?

Seidelmann: Good QA programs are an expensive proposition in terms of both infrastructure (having the software and systems to track these parameters) and, more important, physician time (to perform the blind readings and participate in the adjudication process for discrepancies). Relying on client hospitals to buy all of these tools for your group detracts from productivity, due to lack of integration, and is not where the industry is going. Even small groups are going to have to carve out the time for QA analysis; if they don’t, they won’t survive. It’s our size that has allowed us to build out these processes, but I believe they are a necessary investment that radiology groups have to start taking ownership of.

ImagingBiz: Should the public be alarmed at radiologists’ error rates? What about fellow clinicians?

Seidelmann: This is the unfortunate possible ramification of a lay-press article that does not give a balanced perspective. Most studies flagged in a good radiology QA process do not reflect errors and mistakes, but rather discrepancies between what one radiologist sees and what another sees. That is why a blinded peer-review process is so much more rigorous than one based on looking at the results of the previous report. Although discrepancies should be tracked and investigated thoroughly, only rarely do they have an adverse patient impact.
Physicians, in general, don’t intend to make errors, but physicians are human, and errors will happen when humans make decisions. That’s why we believe so strongly in subspecialization, with the safety net of a robust QA program. We’re constantly looking for anything that might be amiss. Errors will happen, but I believe our approach minimizes them to the extent humanly possible.

Cat Vasko is editor of ImagingBiz.com and associate editor of Radiology Business Journal.