AI speeds, improves chest X-ray interpretations

Six radiologists interpreting around 500 chest radiographs were more efficient, more accurate or both when assisted by AI than when reading unaided, a new comparative performance study shows.

In the exercise, conducted at Mass General Brigham and supported by the AI vendor whose algorithm was used in the project, the human-machine combo made its gains without significant sacrifices in specificity.

The work is described in a report posted Aug. 31 in JAMA Network Open [1].

Lead author Jong Seok Ahn, MD, of Seoul-based Lunit, senior author Mannudeep Kalra, MD, of Massachusetts General Hospital, and colleagues focused the comparison on four target findings: pneumonia, lung nodules, pleural effusion and pneumothorax.

The participating radiologists were two attendings, two residents and two thoracic imaging fellows.

Basing ground-truth diagnoses on the consensus conclusions of two thoracic specialists, the researchers found readers had significantly higher sensitivity for all four findings when aided by AI than when working unaided.

Meanwhile, reports were filed at a roughly 10% quicker pace when readers used the algorithm (36.9 seconds) than when they went it alone (40.8 seconds).

“In this cohort study, the use of an AI algorithm was associated with sensitivity gains for all four target chest radiograph findings across all readers regardless of their experience and training status,” the authors comment.


Algorithm Alone Not as Sharp on Specificity

The team also reports that the algorithm’s standalone performance rivaled that of the radiologists on sensitivity but lagged on specificity. They surmise this may stem from the design of the AI engine’s output, which labeled both airspace and interstitial opacities as pneumonia.

By contrast, the researchers explain, the ground-truth and test radiologists used the “classic airspace pattern” to label a finding as pneumonia.

“This difference in labeling likely contributed to the lower specificity for the AI stand-alone performance and is further evident from higher AI specificity on chest radiographs without any nontarget findings,” they offer.

Zeroing in on the four target chest X-ray findings, Ahn and co-authors point out that the gains in speed and accuracy were prominent for three of the four residents and fellows.

As for the two participating attending radiologists, neither improved their efficiency by using the AI, but the tool boosted target detection for both.


Turnaround Times with AI ‘Will Become More Critical’ Going Forward

The authors remark that, despite AI’s reputation for generating false-positive findings, the heightened sensitivity of the present algorithm “did not come at the cost of” a significant change in specificity. Instead:

“[A]ll readers were able to reject AI-detected false-positive findings while benefiting from acceptance of true-positive findings detected and marked up by the AI algorithm.”

Demonstration of noninferiority of interpretation time with AI vs. non-AI interpretation, the authors add, “will become more critical as AI algorithms expand their target findings beyond a handful to a comprehensive, multi-finding detection.”

For now, they suggest, it’s clear that an AI algorithm can improve radiologists’ accuracy and efficiency in detecting abnormalities on chest X-rays.


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
