Technologist ‘learning opportunities’ vastly outnumber imaging ‘do-overs’ across almost 1 million exams

Reviewing a 20-month run of a closed-loop MD-to-technologist communications tool embedded in a PACS, radiology researchers found minor image-quality problems to be 10 times more common than patient callbacks for repeat imaging.

The minor problems, dubbed “technologist learning opportunities” by the team behind the study, gave radiologists almost 2,000 chances to refine technologists’ skills individually or in groups.

The project was carried out at Brigham and Women’s Hospital in Boston and is detailed in a study published in the July-August print edition of Current Problems in Diagnostic Radiology [1].

Tracking 977,000 imaging exams, corresponding author Daniel Glazer, MD, senior author Ramin Khorasani, MD, MPH, and colleagues found close to 2,000 that generated learning opportunities.

Exams were counted as learning opportunities if image quality was somewhat suboptimal yet still high enough to support a radiology report.

By comparison, only 208 imaging datasets (0.02%) were of poor enough quality to warrant bringing a patient back for re-imaging.

Outpatient MRI the Setting and Modality to Monitor Most Closely for Quality Control

The point of the project was to seek patterns in variables such as modality, care setting and subspecialty associated with radiologists’ real-time critiques of, or dissatisfaction with, image quality.

One of the researchers’ more illuminating findings: Six radiologists accounted for more than 40% of all callbacks. From this, Glazer and co-authors surmise that the subjective factor of “individual radiologist preference” is substantive.

Other key findings:

  • Both learning opportunities and callbacks were most common in the outpatient setting and for MRI.
  • There was substantial variability in patient callbacks by division. Patients whose exams were interpreted by the cardiovascular division, for example, were 200 times more likely to be called back than patients whose exams were interpreted by the cancer imaging division.
  • There was significant variation in the number of technologist learning opportunities generated by each division. Learning opportunities were 40 times more common in the abdominal division, for example, than in breast imaging.

Glazer and colleagues comment that identifying which subspecialties and modalities tend to produce the most suboptimal images is worthwhile because it enables tailored technologist training, whether for individuals or groups.

Rad-Tech Communications via PACS Workflow Works

Noting the volume of prior studies describing efforts to create radiologist peer learning opportunities, the authors cite one showing that a PACS-integrated peer learning tool “yields significantly more clinically significant feedback than a traditional peer review tool for radiologists” [2].

The present study builds on the body of work with its integration of a closed-loop learning tool inside a PACS workflow, they point out.

The authors recorded no significant decline in callbacks during the study period, a detail that may owe to the small share of exams deemed deficient.

“Future studies will be needed,” they add, “to assess whether improvement initiatives to reduce factors contributing to image quality issues identified during the interpretation process will improve quality of patient care.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
