Recall rates increase when reading radiologists work with trainees, but not cancer detection
When reading radiologists interpret mammograms with trainees, the recall rate (RR) increases, but the cancer detection rate (CDR) is unaffected, according to a recent study published in the Journal of the American College of Radiology. Do reading radiologists allow themselves to be negatively influenced by trainees?
Jeffrey R. Hawley, MD, of the Ohio State University Wexner Medical Center in Columbus, and colleagues aimed to address this question by analyzing more than 47,000 mammograms from more than 34,000 patients. More than 28,000 exams were interpreted by attending radiologists alone, while more than 19,000 were interpreted by attending radiologists with assistance from a trainee.
All mammograms were interpreted between Jan. 1, 2011, and Dec. 31, 2013, and the patients were women ages 35 and older with no personal history of breast cancer. Screening sites included a full-service breast imaging center, six satellite imaging centers, and a mobile mammography unit.
Overall, the RR for attending radiologists reading on their own was 14.7 percent. When a trainee was involved, the RR rose to 18 percent.
However, the authors noted that the CDR did not increase in kind. The rate was 5.7 per 1,000 for attending radiologists reading on their own and 5.2 per 1,000 when a trainee was involved.
“In our study, the lack of increased CDR at a significantly higher RR with the involvement of trainees compared with a single attending radiologist indicates an increased percentage of false-positive results,” the authors wrote. “This suggests that any expertise brought to image interpretation by the radiology trainees was insufficient to positively influence desired outcomes.”
According to Hawley and colleagues, reading radiologists may have unintentionally allowed their interpretations to be influenced by the trainees.
“In our study, trainees electronically annotated images to indicate findings of concern before review by attending radiologists,” the authors wrote. “The awareness of a finding annotated on the images might have contributed to increased RRs. A process whereby electronic annotations are turned off during initial review and later displayed after case review, similar to a second review with [computer-aided detection], may be beneficial to avoid unnecessary bias before thorough review. Similarly, a process whereby trainee-generated reports are not reviewed until after complete study review may be beneficial.”
Hawley et al. added that reading radiologists at all levels of experience appeared to be influenced by the trainees' opinions, from the youngest physicians to the most experienced reader in the group.
The authors noted that their study was limited to a single three-year experience at one academic medical center.