RadNet: General radiologists achieve specialist-level performance interpreting mammograms with help from AI

General radiologists can achieve specialist-level performance interpreting mammograms with help from artificial intelligence, according to new research from RadNet Inc.

The Los Angeles-based provider and its AI division, DeepHealth, recently assessed the reading performance of 18 physicians, half of them breast imaging specialists and half general radiologists. Participants analyzed 240 retrospectively collected digital breast tomosynthesis (DBT) exams, looking for signs of cancer.

Every radiologist demonstrated improved performance when reading with the help of Saige-DX, a custom-built categorical AI system. On average, they recorded an area under the receiver operating characteristic curve (a measure of diagnostic accuracy) of 0.93 with AI, compared with 0.87 without it, experts detailed in Radiology: Artificial Intelligence [1].
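For context on the metric itself, the sketch below shows how an area under the ROC curve is commonly computed from per-case suspicion scores and ground-truth cancer labels. It is purely illustrative: the scikit-learn call and the toy numbers are assumptions made here for explanation and are not drawn from the study.

```python
# Illustrative only: how an area under the ROC curve (AUC) is typically
# computed from per-case reads. The numbers are made-up placeholders,
# not data from the RadNet/DeepHealth study.
from sklearn.metrics import roc_auc_score

# 1 = biopsy-proven cancer, 0 = confirmed negative (hypothetical labels)
truth = [1, 0, 0, 1, 0, 1, 0, 0]

# A reader's suspicion scores for the same cases, without and with AI assistance
scores_unaided = [0.70, 0.40, 0.55, 0.60, 0.20, 0.80, 0.35, 0.50]
scores_with_ai = [0.85, 0.30, 0.45, 0.75, 0.15, 0.90, 0.25, 0.40]

print("AUC unaided:", roc_auc_score(truth, scores_unaided))
print("AUC with AI:", roc_auc_score(truth, scores_with_ai))
```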

“In conclusion, our results show that general radiologists may achieve specialist-level performance when interpreting screening DBT mammograms with the aid of AI and that specialists can achieve even higher performance (increased sensitivity and specificity) in a diverse population across multiple cancer types,” Jiye G. Kim, PhD, DeepHealth’s director of clinical studies, and colleagues wrote Feb. 7.

Improvements were seen for both general radiologists (a 0.08 gain in AUC) and breast imaging specialists (0.05). Such gains persisted across all cancer characteristics (e.g., lesion type and size) and patient subgroups (race and ethnicity, age, and breast density).

“The mean performance of general radiologists with AI exceeded that of breast imaging specialists unaided by AI, suggesting that the AI software could help patients receive specialist-level interpretations for their screening mammogram even if interpreted by a general radiologist,” the authors reported. “The benefits of using AI are not limited to generalists, as specialists also showed improved performance.”

Kim et al. offered several potential explanations for the study’s “strong” results. The AI system was built with difficult cases in mind, with the algorithm’s training dataset incorporating cancers missed by radiologists in clinical practice. Also, the limited number of “bounding boxes” the system outputs on breast images may have helped radiologists “avoid the potential distraction caused by the many marks placed by less specific computer-aided software tools.”

“While it is tempting to speculate that the core AI algorithm has superior diagnostic performance, the way in which human readers interact with the output of AI is increasingly recognized as a substantial contributor,” the authors noted. “Further investigations are underway to better understand how AI can best assist human readers in the difficult task of cancer screening.”

Read much more, including potential study limitations, in the full study [1].

Marty Stempniak

