Q&A: Mass General's Harvey on shifting from peer review to peer learning

Peer review has long been the industry norm for providing feedback to radiologists, but more and more academic and clinical departments are starting to implement a judgment-free alternative: peer learning.

Radiology Business recently spoke with H. Benjamin Harvey, MD, JD, director of quality improvement at Massachusetts General Hospital and assistant professor at Harvard Medical School, to discuss the industry’s shift from peer review to peer learning and to hear about his own experience with the newer approach.

Why is feedback so important in radiology?

H. Benjamin Harvey, MD, JD: As physicians, we care deeply about our patients. We are constantly striving to provide the highest quality of care just as we would want for our own loved ones. But as humans, we know that we are not infallible. We all have strengths and weaknesses and personal blind spots, which at times can lead to suboptimal care or even mistakes. In order to improve on our weaknesses and mitigate our blind spots, we must know that they exist. This is what makes feedback so critical.

Feedback, and in particular peer feedback, provides radiologists with an honest assessment of the quality of care that we are providing and empowers us to improve over time. What’s more, since as humans we are wired very similarly, one radiologist’s weakness or blind spot is often shared by others. As such, feedback in a peer learning or group setting allows us to learn and improve based not only on our own experiences, but on the experiences of our peers as well.

There has been a significant shift from peer review to peer learning. Are we seeing a trend of open dialogue and discussions instead of just a scoring system? How will an increase in peer learning utilization be beneficial?

As with many things, not all feedback is of equal value.

Too often, traditional peer review has provided context-poor feedback, constrained by a scoring system. Use of these scoring systems can have two potential disadvantages. First, it may limit the review of clinical care to simply whether there was a clinically significant missed finding or not. While this is one important aspect of the care we provide as diagnostic radiologists, it fails to incorporate other facets of our work that are also critical to quality, including the clarity of the report, concordance of the interpretation with prevailing guidelines and appropriateness of recommendations, just to name a few.

Second, physicians are often concerned about the punitive or career implications of providing a low score to a colleague, which can chill honest feedback in the traditional peer review setting. The peer learning framework encourages non-punitive group discussions about the successes and failures of the care we provide. As such, based on our experience, the feedback is often more open and honest than feedback that is provided in traditional peer review settings. Additionally, feedback can be provided with much richer context and potential factors contributing to errors can be identified so that strategies for addressing such factors can be developed.

I hope that increased utilization of peer learning will improve the overall consistency and quality of feedback that we get as radiologists, thereby allowing us to provide even better care for our patients.

In a recent article in the Journal of the American College of Radiology (JACR), you wrote that your own department at Massachusetts General Hospital had abandoned traditional peer review and created its own peer learning approach: consensus-oriented group review (COGR). Can you talk a bit about your team’s approach? How has the new process impacted your department?

We have presented the consensus-oriented group review (COGR) process and our initial outcomes in two separate publications within the JACR, so I would point your readers to those papers for a detailed description. In short, COGR is a method of peer review based on group discussions of cases in a conference setting. For each randomly selected case, the group of radiologists views the images and the report together and attempts to arrive at a consensus as to whether the report needs to be changed. In cases where the group feels the report is incorrect or suboptimal, an open discussion of the case ensues. And because at least three radiologists (and often an entire clinical section, including trainees) participate in each COGR conference, both the task of evaluating a case and the fruits of the discussion are shared among a larger group of radiologists. Also, we have reassured our radiologists that discordance data from COGR are not provided to hospital administration, fostering open and candid discussions of difficult cases.

Since implementing COGR, we have seen significant improvements in our culture of safety, with our radiologists feeling more comfortable identifying and openly discussing medical errors, coaching junior colleagues about misses, and sharing best practices.

What would you like to convey to those radiology departments that are still running a traditional peer review program? Should they move toward peer learning?

Ideally, maybe. But in the real world this is not possible for the vast majority of radiology practices.

For instance, whether desirable or not, many hospital administrations around the country want and/or require peer review performance metrics from their radiology groups. In fact, some would argue that the Joint Commission requires a hospital to collect peer review metrics. Thus, if I as a leader of a radiology practice inform hospital administration that we have eschewed peer review for peer learning and will no longer be providing them with discordance metrics, I may soon thereafter be telling my radiology colleagues that we no longer have a contract. Many hospital administrators want, and expect, traditional peer review metrics, and they will accept no substitute.

Thus, my recommendation for groups in this situation is to do both. Continue to perform your workflow-integrated peer review, such as RADPEER, recognizing that the data and performance improvements produced by this method are less than optimal. This will provide you with the peer review metrics necessary to satisfy hospital administration. However, in addition to this, I would recommend that the group set up a regular process for peer learning—a non-punitive time of critically reviewing cases with the discussion and outcomes not reported outside of the conference. This can be quarterly, monthly or weekly, and may take the form of a formal or informal “learning opportunities” conference, the PC version of a missed case conference, highlighting both missed cases and great calls. In this way, radiology groups can maintain their contracts and their quality.

That said, I hope that in the future we will see the traditional method of radiology peer review eliminated, as I feel that it may actually be more harmful than beneficial.

""

