Deep learning significantly reduces thoracic radiologists’ errors when used as a second reader
A deep learning-based algorithm showed “excellent” performance in spotting lung cancers missed on chest x-rays, according to an analysis published Thursday.
The tool also significantly reduced thoracic radiologists’ mistakes when deployed as a second reader, Korean imaging experts reported in Radiology: Cardiothoracic Imaging. Based on their results, the researchers believe such commercially available AI assistants could prove pivotal in addressing high error rates in one of the most common types of imaging exams.
“Various strategies have been proposed to reduce reading errors, including careful comparison with existing radiographs and double reading; however, often these are not feasible in routine practice,” Ju Gang Nam, with the Department of Radiology at Seoul National University Hospital, and colleagues wrote Dec. 10. “It is encouraging that the algorithm showed good sensitivity that was higher than that of the thoracic radiologists (69.6% vs 47%) as well as higher specificity (94% vs 78%).”
For the analysis, Nam and colleagues used a retrospective set of 168 chest radiographs containing 187 instances of lung cancer, along with 50 normal x-rays. They tasked four thoracic radiologists with independently reevaluating the images, looking for lung nodules both with and without the help of a deep learning program developed by Seoul-based Lunit.
Bottom line: The algorithm demonstrated “excellent” diagnostic performance as measured by per-radiograph classification and per-lesion localization. Both values, the researchers reported, were significantly higher than the physicians’ scores. And when the thoracic radiologists worked alongside the deep learning tool, their performance showed substantial gains.
“These results suggest that this algorithm has potential to improve the detection rate of lung cancers that prove challenging to radiologists,” experts advised.
You can read much more about the results in RSNA’s Radiology: Cardiothoracic Imaging.