AI classifies free-text pathology reports

Machine learning algorithms can classify pathology reports and help providers track follow-up imaging recommendations, according to new findings published in Radiology: Artificial Intelligence.

The authors explored data from more than 2,000 pathology reports acquired between 2012 and 2018. All patients had previously been told to return for follow-up imaging after an initial abdominal scan produced a suspicious or abnormal finding. One medical student and one radiologist annotated each report, indicating whether it was relevant to the patient’s liver, pancreas, kidneys and/or adrenal glands, or to none of those organs.

“These organs were chosen because they represent the major categories of abnormal imaging findings followed by our tracking system,” wrote Jackson M. Steinkamp, of the department of radiology at the Hospital of the University of Pennsylvania, and colleagues. “Reports were labeled as relevant to an organ if any tissue sample in the report contained tissue from that organ, or if the biopsy was performed to further work up a pathology of that organ (eg, a distant metastasis).”

The researchers then compared the performance of four automated classification methods (simple string matching, random forests, extreme gradient boosting and support vector machines) against two neural network architectures, convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, to see which techniques excelled at organ-level classification. The ultimate goal, the team explained, was to develop a classification system that could be used within a “larger system of automated radiology recommendation follow-up tracking.”
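
The paper describes the authors’ own models, but as a rough, hypothetical sketch of what one of the simpler arms of such a comparison could look like, a TF-IDF plus support vector machine pipeline for multi-label organ classification might be set up along these lines (the report snippets, organ labels and model settings below are illustrative assumptions, not the study’s data or code):

# Illustrative sketch only: a bag-of-words baseline for multi-label organ
# classification of pathology report text, in the spirit of the support
# vector machine comparison arm. The reports and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

ORGANS = ["liver", "pancreas", "kidney", "adrenal"]

reports = [
    "Liver, segment 6, core biopsy: metastatic adenocarcinoma.",
    "Pancreas, head, fine needle aspiration: ductal adenocarcinoma.",
    "Kidney, left, needle biopsy: clear cell renal cell carcinoma.",
    "Adrenal gland, right, resection: adrenal cortical adenoma.",
]
# One binary indicator per organ for each report (multi-label targets).
labels = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

# TF-IDF features feed one linear SVM per organ label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LinearSVC()),
)
model.fit(reports, labels)

print(model.predict(["Pancreas, tail, core biopsy: adenocarcinoma."]))

In the study, the end-to-end neural network architectures learned their own text representations rather than relying on hand-built features of this kind.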

Overall, LSTM networks achieved the top F1 score at 96.7%, followed by CNNs at 96.3%, extreme gradient boosting at 93.9%, support vector machines at 89.9%, random forests at 82.8% and simple string matching at 75.2%.
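
For readers unfamiliar with the metric, the F1 score is the harmonic mean of precision and recall, and per-organ scores like those above can be computed directly; a minimal, hypothetical example (the label arrays here are invented for illustration, not the study’s data):

# Illustrative only: F1 for one organ class from hypothetical predictions.
from sklearn.metrics import f1_score

y_true = [1, 1, 0, 1, 0, 1]  # ground-truth relevance labels for six reports
y_pred = [1, 0, 0, 1, 0, 1]  # a model's predictions for the same reports

# Precision = 1.0, recall = 0.75, so F1 = 2 * (1.0 * 0.75) / (1.0 + 0.75) ≈ 0.857.
print(f1_score(y_true, y_pred))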

“We provide evidence that end-to-end neural network architectures perform well on a clinical text-classification task with high levels of human interpretability,” the authors wrote. “Such systems have the potential to improve information extraction and summarization in a wide variety of clinical contexts, toward the ultimate end of improving care quality and efficiency.”

In a related editorial, Tiffany Ting Liu, PhD, of Stanford University, praised the researchers’ success, noting that their algorithms “achieved excellent performance in classifying pathology reports into four relevant organ classes.”

However, she added, “extraction and classification of information at a more fine-grained level may be required.”

“Although limited by the availability of training data, the study demonstrates the feasibility of a more complex 12-organ classification task,” Liu wrote. “Future studies with more detailed labeled data and complex tasks may be warranted.”

Michael Walter, Managing Editor

Michael has more than 18 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
