AI trained on adult data set performs adequately when applied to pediatric population

An artificial intelligence algorithm trained on adult medical images performed “adequately” when applied to a pediatric population, according to an analysis published Monday in JACR [1].

Despite an explosion in new AI tools available to assist radiologists in their work, there remains a paucity of options in the pediatric space. Scientists with the University of Maryland recently set out to address this gap, testing whether technology used for processing adult chest X-rays could diagnose pneumonia in children.

They found encouraging early results, with the AI pinpointing about 67% of pneumonia cases and 33% of normal instances in a data set of 5,856 pediatric images.

“Our findings suggest that using adult data sets in diagnosing pediatric cases may be a useful shortcut in addressing the paucity of pediatric data sets for AI model training,” corresponding author Jean Jeudy, MD, with the university’s Department of Diagnostic Radiology & Nuclear Medicine, and co-authors concluded. “Further studies are needed to determine whether this would be effective for different age groups and conditions and for validation across multiple centers.”

For the study, Jeudy and colleagues utilized a publicly available pediatric data set from the Guangzhou Women and Children’s Medical Center in China, which included thousands of frontal-view chest X-rays from children ages 1 to 5. They used TorchXRayVision, an open-source software tool for working with chest radiographs and deep-learning models.
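For readers unfamiliar with the tool, the snippet below is a minimal sketch of how TorchXRayVision is typically used to score a single frontal chest X-ray for pneumonia. The study does not spell out which pretrained weights, preprocessing steps or decision threshold the team applied, so the weight choice and file name here are placeholder assumptions, not details from the paper.

```python
# Minimal sketch of scoring one chest X-ray with TorchXRayVision.
# The weight set and file name are illustrative assumptions, not details
# taken from the JACR study.
import skimage.io
import torch
import torchvision
import torchxrayvision as xrv

img = skimage.io.imread("pediatric_cxr.png")   # placeholder file name
img = xrv.datasets.normalize(img, 255)         # rescale pixels to the library's expected range
if img.ndim == 3:
    img = img.mean(2)                          # collapse RGB to a single channel
img = img[None, ...]                           # shape [1, H, W]

transform = torchvision.transforms.Compose([
    xrv.datasets.XRayCenterCrop(),
    xrv.datasets.XRayResizer(224),
])
img = transform(img)

model = xrv.models.DenseNet(weights="densenet121-res224-all")  # adult-trained weights
model.eval()
with torch.no_grad():
    out = model(torch.from_numpy(img).float()[None, ...])      # batch of one image

scores = dict(zip(model.pathologies, out[0].numpy()))
print("Pneumonia score:", scores["Pneumonia"])
```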

Performance of the algorithm was “satisfactory” for overall findings and in the bacterial pneumonia subset (F1-score = 0.83 and 0.81, respectively). However, it decreased in cases of viral pneumonia, “likely attributable to the different pattern of findings” between the two types. Negative predictive value and specificity also were low: the former reflects “a number of false negative results (n=862),” while a “sizeable” tally of 512 false positives weighed on specificity.

“This would be a likely consequence of developmental and anatomical disparities between young pediatric patients versus fully developed adults,” the study noted. “In addition, it could also be explained by the presence of a significant number of expiratory views as a pediatric patient, 1 to 5 years of age, is less likely to cooperate with breathing instructions.”
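For readers who want to see how those error counts feed the summary statistics, here is a minimal sketch of the standard confusion-matrix arithmetic. Only the 862 false negatives and 512 false positives come from the article; the true-positive and true-negative counts in the usage line are placeholders added solely to make the example runnable.

```python
# Standard confusion-matrix arithmetic behind the metrics quoted above.
# Only fn=862 and fp=512 are reported in the article; tp and tn are
# hypothetical placeholders, not figures from the study.
def summarize(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)          # share of pneumonia cases the model catches
    specificity = tn / (tn + fp)          # share of normal studies it correctly clears
    ppv = tp / (tp + fp)                  # positive predictive value (precision)
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1}

print(summarize(tp=3000, fp=512, fn=862, tn=1000))  # tp and tn are illustrative only
```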

Jeudy and co-authors also found that the model was outperformed by several others that were trained specifically on pediatric images. However, those models were tested on small sample sizes and lacked external validation.

“Given the relative scarcity of pediatric data sets and AI applications, we encourage attempting to leverage adult models and data sets to accelerate pediatric AI research and the implementation of its clinical applications,” the authors wrote in the study’s take-home points.

Read much more, including potential limitations, in the Journal of the American College of Radiology at the link below.

Marty Stempniak

Marty Stempniak has covered healthcare since 2012, with his byline appearing in the American Hospital Association's member magazine, Modern Healthcare and McKnight's. Prior to that, he wrote about village government and local business for his hometown newspaper in Oak Park, Illinois. He won Peter Lisagor and Gold EXCEL awards in 2017 for his coverage of the opioid epidemic.
