What Has Artificial Intelligence Done for Radiology Lately?

From basic sorting algorithms to sophisticated neural networks, AI and its offspring continue to generate buzz throughout medicine, business, academia and the media. Much of the chatter amounts to no more than hot air. The most farfetched imaginings are usually easy to spot and dismiss.

Yet accounts of real-world AI deployments—applications with strong potential to improve patient care while cutting costs—are amassing into a category of medical literature in its own right.

RSNA’s Radiology: Artificial Intelligence is the first peer-reviewed journal to focus entirely on the technology, signaling radiology’s place at the forefront of the AI revolution within healthcare. But the journal and the profession are both likely to have lots of company at the head of the class before long.

With those observations in hand, RBJ sought out a representative sampling of radiologists who are not just talking about AI but also using it to beneficial effect. Here are five highlights from what we found.

An Unfractured View of Fractures

The diagnosis of elbow injuries in children and adolescents is difficult because the developing skeletal system has unique features not seen in adults. For example, cartilage still makes up a portion of pediatric elbow joints, rendering certain injuries impossible to detect on X-rays. Additionally, pediatric elbow joints have many growth centers that ossify as children grow. As a result, fracture patterns vary significantly depending on an individual child’s age and unique developmental factors.

During their tenure in the diagnostic radiology program at Baylor College of Medicine in Houston, Jesse Rayan, MD, now an imaging fellow at Massachusetts General Hospital, and Nakul Reddy, MD, an interventional radiology fellow at MD Anderson Cancer Center, developed a means of using AI to overcome these and other diagnostic challenges by automating the analysis of pediatric elbow radiographs.

The model leverages a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN) to process multiple images together. A CNN is a deep learning algorithm that can take in an input image, assign learnable weights and biases to various aspects of or objects in that image, and differentiate one aspect or object from another. An RNN recognizes patterns in sequential data; applied to imaging, it can treat a series of images, or of patches within a single image, as a sequence.

“A simpler way to think about this would be to consider the CNN as analogous to the human visual cortex,” Rayan explains. “It’s able to recognize patterns in an image and classify if something is present or not. This works great for single images, but many radiographic studies have more than one image for a single study.”

In the elbow application developed at Baylor, the RNN processes the CNN’s outputs for multiple views of the pediatric elbow before arriving at a single decision point.
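To make the division of labor concrete, the sketch below shows one way a per-view CNN can feed a recurrent aggregator in PyTorch. It is a minimal illustration only; the ResNet-18 backbone, feature dimension and GRU aggregator are assumptions chosen for the example, not the architecture published by the Baylor team.

```python
# Minimal sketch of a CNN + RNN multiview classifier (illustrative assumptions,
# not the published Baylor architecture).
import torch
import torch.nn as nn
import torchvision.models as models

class MultiviewElbowClassifier(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)      # CNN feature extractor, one view at a time
        backbone.fc = nn.Identity()                   # keep the 512-d feature vector
        self.cnn = backbone
        self.rnn = nn.GRU(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)   # acute vs. nonacute

    def forward(self, views):
        # views: (batch, num_views, 3, H, W), e.g., AP, lateral and oblique projections
        b, v, c, h, w = views.shape
        feats = self.cnn(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        _, last_hidden = self.rnn(feats)              # one decision after seeing every view
        return torch.sigmoid(self.classifier(last_hidden[-1]))
```

In this arrangement the CNN never sees more than one projection at a time; the recurrent layer is what carries information across views so the study is classified as a whole.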

Rayan, Reddy and colleagues tested the method on 21,456 radiographic studies comprising 58,817 images of pediatric elbows, along with the associated radiology reports, all captured at Texas Children’s Hospital in Houston. Accuracy on the studied dataset was 88%, with sensitivity of 91% and specificity of 84%.
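For readers less familiar with the terminology, accuracy, sensitivity and specificity all follow from a model’s confusion matrix. The short function below is definitional only; the counts it takes are placeholders, not figures from the study.

```python
# Definitions of the reported metrics in terms of a binary confusion matrix.
def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                # fraction of abnormal studies caught
    specificity = tn / (tn + fp)                # fraction of normal studies correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction classified correctly
    return accuracy, sensitivity, specificity
```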

The researchers concluded that deep learning can effectively classify acute and nonacute pediatric elbow abnormalities on radiographs in a trauma setting. “A recurrent neural network was used to classify an entire radiographic series, arrive at a decision based on all views and identify fractures in pediatric patients with variable skeletal immaturity,” they underscore.

Radiology: Artificial Intelligence published the study in its inaugural issue (“Binomial Classification of Pediatric Elbow Fractures Using a Deep Learning Multiview Approach Emulating Radiologist Decision Making,” January 2019).

Rayan tells RBJ the model’s greatest potential to improve patient care lies in ER triage, where studies that need more attention can be prioritized for quicker turnaround. Additionally, he notes, using AI in this context could enhance the caliber of patient care by allowing radiologists to home in on specific findings that might otherwise be overlooked.

Rayan believes this is just one example of how AI can assist radiologists in clinical decision-making without delay. If AI tools help radiologists increase the accuracy of their diagnoses and recommendations, he asserts, costs can be reduced across hospitals and health systems.

Going forward, Rayan plans to work on similar projects in the ER radiology setting, including studying whether deep learning can effectively triage studies in other emergent situations. He deems increasing radiologists’ confidence in their decisions without slowing turnaround times “the most exciting application of this technology.”

Another Eye on Alzheimer’s

Physicians face a tough dilemma when diagnosing older patients with memory issues. Once organic causes like stroke, infection and Parkinson’s disease have been ruled out, they must determine whether these individuals have Alzheimer’s disease, another form of dementia or some form of mild cognitive impairment.

Radiologist Jae Ho Sohn, MD, of UC-San Francisco and colleagues developed an algorithm that addresses this dilemma by analyzing FDG-PET scans of patients whose memory no longer appears to be functioning properly. Based on this analysis, the algorithm provides what Sohn considers a “highly accurate prediction that can boost the confidence of Alzheimer’s disease diagnosis or rule it out.”

The algorithm looks for subtle, slow diffuse processes and global changes in the brain that are difficult to see with the naked eye, such as changes in glucose uptake.

“Traditionally, radiologists analyze the patterns of reduced glucose uptake,” Sohn explains. “For Alzheimer’s disease, symmetric reduction of glucose uptake in the temporal and parietal lobes of the brain has been the most specific finding. But these classic findings manifest later in Alzheimer’s disease” than the changes identified by the algorithm.

Moreover, the algorithm considers the whole picture of the brain on the FDG-PET scan to make its prediction. “We show a saliency map image that demonstrates where the algorithm is looking in the brain, and it covers the entire brain—not just one region,” Sohn says.
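As a rough illustration of how such a map can be produced, the sketch below computes a simple gradient-based saliency map for a hypothetical 3D classifier. The model interface, input shape and normalization are assumptions made for the example; the published work may use a different attribution method.

```python
# Gradient-based saliency sketch for a hypothetical FDG-PET classifier.
# Assumes `model` maps a (1, 1, D, H, W) volume to a (1, 1) score tensor.
import torch

def saliency_map(model, pet_volume):
    model.eval()
    volume = pet_volume.detach().clone().requires_grad_(True)
    score = model(volume)[0, 0]                # predicted Alzheimer score
    score.backward()                           # gradient of the score w.r.t. every voxel
    saliency = volume.grad.abs().squeeze()     # voxels that most influence the prediction
    return saliency / (saliency.max() + 1e-8)  # normalize to [0, 1] for display
```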

Sohn and colleagues trained the algorithm on images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a massive public dataset of PET scans conducted on patients who were eventually diagnosed with Alzheimer’s disease, mild cognitive impairment or no disorder. Over time, the algorithm learned on its own which features and patterns are important for predicting a diagnosis of Alzheimer’s disease and which are insignificant.

The researchers tested the algorithm on two new datasets after it had been trained on 1,921 scans. One dataset contained 188 images from the same ADNI database that had not yet been presented to the algorithm. The other was an entirely new set of scans from 40 patients of the UCSF Memory and Aging Center, all of whom had presented with possible cognitive impairment. The algorithm correctly identified 92% of the patients in the first test set and 98% of those in the second who eventually developed Alzheimer’s disease.

These predictions were made slightly more than six years, on average, before the patient received a final diagnosis. Sohn et al. report the research in “A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain” (Radiology, online Nov. 6, 2018).

“With further large-scale external validation on multi-institutional data and model calibration, the algorithm may be integrated into clinical workflow and serve as an important decision-support tool to aid radiology readers and clinicians with early prediction of Alzheimer’s disease,” the authors write.

Sohn believes the algorithm may greatly enhance radiologists’ ability to accurately predict whether a patient with memory issues will progress to Alzheimer’s disease. It also could add confidence to Alzheimer’s disease diagnoses made by neurologists—before all the symptoms have manifested.

“An even more important implication is that, without early diagnosis of Alzheimer’s disease”—for example, as enabled by the algorithm—“there will likely be no cure or stopping the progression,” Sohn says.

He adds that the algorithm would expand the indications for FDG-PET scans of the brain, which currently are not routinely used to diagnose Alzheimer’s disease in patients who present with memory impairment. That would generate more work for radiologists and create a need for additional staffing, but it also has the potential to increase radiology revenues from the additional FDG-PET studies performed.

Sohn is currently focusing on streamlining and optimizing the methodology of applying AI to radiology research so that similar advancements can be made in other clinical areas. Accordingly, he is building a large, well-organized database of images and annotations designed to support an approach wherein radiologists first detect patterns in very big radiological data and then identify which patterns are of clinical significance. This approach would replace traditional, hypothesis-based research and is now being applied in brain CT, lung cancer, health economics and dermatopathology datasets.

Objectivity Aids Prostate Assessment

In recent years, multiparametric MRI (mpMRI) has become an important modality for assessing prostate cancer, but the interpretation of mpMRI prostate studies remains variable because of its subjective nature. One team of researchers is working to wring clarity from the fuzziness with a new framework for predicting the progression of prostate cancer, specifically by differentiating between low- and high-risk cases.

The framework melds radiomics—the use of algorithms to extract large amounts of quantitative characteristics from images—with machine learning. Among members of the team that developed the technique were Gaurav Pandey, PhD, of the Icahn School of Medicine at Mount Sinai in New York City and radiologists Vinay Duddalwar, MD, and Bino Varghese, PhD, of the Keck School of Medicine at the University of Southern California.

“Machine learning-based methods, the specific form of AI used” to create the framework, “are designed to sift through large amounts of data—structured or unstructured and without any particular guiding biomedical hypothesis—to discover potentially actionable knowledge directly from data,” Pandey says.

One form of this knowledge is a predictive model that shows a mathematical relationship between the features in the data describing an entity of interest—say, a patient—and an outcome or label such as the disease status.

The framework harnesses seven established classification algorithms to predict and assign a prostate cancer aggressiveness risk label (high or low) for each patient from the radiomics features extracted from that patient’s mpMRI images.
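As a general illustration of this pattern (radiomics features in, a risk label out), the scikit-learn sketch below evaluates a few off-the-shelf classifiers with cross-validation. The specific classifiers, file names and validation scheme are assumptions made for the example, not the published framework or its seven algorithms.

```python
# Illustrative sketch: radiomics feature vectors in, high/low risk predictions out.
# File names and classifier choices are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X = np.load("radiomics_features.npy")   # one row of extracted features per patient
y = np.load("risk_labels.npy")          # 1 = high risk, 0 = low risk

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "svm_rbf": SVC(probability=True),
}

for name, clf in candidates.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```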

A trial of the framework was presented in “Objective Risk Stratification of Prostate Cancer Using Machine Learning and Radiomics Applied to Multiparametric Magnetic Resonance Images,” a retrospective study published in Scientific Reports (online Feb. 7, 2019). The study involved 68 prostate cancer patients and was based on mpMRI images, along with transrectal ultrasound-MRI fusion-guided biopsy of the prostate performed within two months of mpMRI. The framework demonstrated high sensitivity and predictive value, owing to the combination of machine learning with radiomics, and large training and validation datasets yielded more accurate predictions than in prior studies, Pandey et al. write.

According to the research team, the value of the framework in enhancing the quality of patient care extends beyond improving diagnostic confidence: It offers more precise clinical information related to individual patients’ data.

Specifically, it does not simply identify a patient’s cancer; it allows radiologists to differentiate between aggressive and indolent prostate cancer. This, Duddalwar and Varghese assert, has numerous implications for the physician team preparing a treatment plan for a patient. For instance, a more radical approach to treatment may be instituted earlier rather than at a later stage, or follow-up imaging may be performed at a different interval during treatment. In addition, alternative and supplemental treatments, such as radiotherapy and chemotherapy, may be introduced earlier in treatment if indicated.

Duddalwar and Varghese note that their model doesn’t add to costs already incurred, as it relies not on new imaging protocols but on an innovative means of analyzing routinely collected, standard-of-care images.

They also emphasize that only a minimal amount of time is needed to implement the machine learning-based risk classifier through a graphical user interface and a few keystrokes or mouse clicks. Consequently, the big benefits of better diagnosis and thus prognosis should outweigh the cost of utilizing AI in this context.

In tandem with AI experts like Pandey, Duddalwar and Varghese are presently involved in several projects involving the diagnosis and prognosis of various diseases using a combination of imaging and AI techniques. For example, they are integrating imaging and molecular/genomic attributes in bladder and renal cancers, with the objective of finding patterns in clinical imaging.

Such patterns, Duddalwar and Varghese explain, could suggest specific mutations and molecular attributes in particular patients, in turn providing physicians with more information about which treatment(s) would be better suited for a given individual. Another project in the exploration stages involves the use of AI techniques to identify radiomic features in oncologic imaging as a means of best predicting patients’ response to immunotherapy.

Patterns Emerge in Mammography

Connie Lehman, MD, PhD, Massachusetts General Hospital

Decades into research and awareness efforts, effective early detection of breast cancer remains a challenge. To help turn the tide, researchers from Massachusetts General Hospital and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) built a deep-learning model that, based on a mammogram image alone, can predict as far as five years into the future whether a patient is likely to develop breast cancer.

Using mammograms and outcomes from more than 60,000 of the hospital’s patients, the research team trained the model to pick out subtle patterns in breast tissue that are known precursors of breast cancer, explains Mass General radiologist Constance Lehman, MD, PhD. Among the team members was AI expert Regina Barzilay, PhD, of CSAIL. Lehman and Barzilay previously created an AI algorithm that measures breast density at the level of an experienced clinician and has been in use since January 2018.

“The model deduces the patterns that drive future cancer right from the data,” Lehman tells RBJ. “These patterns are so subtle, it’s impossible for us to see them with the naked eye.”

In “Deep Learning Mammography-Based Model Can Improve Breast Cancer Risk Prediction,” published in the May 2019 edition of Radiology, Barzilay, Lehman and co-authors report that their model performed significantly better than other models designed for the same purpose, accurately placing 31% of all cancer patients in its highest-risk category. By comparison, traditional models correctly placed only around 18% of these patients.
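To make that comparison concrete, the snippet below shows one simple way such a figure can be computed: the share of patients who eventually developed cancer whom a model places in its top risk group. The 10% cutoff and array inputs are assumptions for illustration, not the paper’s exact methodology.

```python
# Share of eventual cancers that land in a model's highest-risk group
# (illustrative calculation; not the study's exact definition).
import numpy as np

def top_risk_capture(risk_scores, developed_cancer, top_fraction=0.1):
    """risk_scores: model output per patient; developed_cancer: 0/1 outcomes."""
    cutoff = np.quantile(risk_scores, 1.0 - top_fraction)
    in_top = risk_scores >= cutoff
    return developed_cancer[in_top].sum() / developed_cancer.sum()
```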

Lehman says the model is a game-changer, as the detailed information it can provide allows for more precise assessment of breast cancer risk at the level of the individual patient. 

“Women have unique and very variable patterns of breast tissue that we can see on their mammogram—patterns that represent the influence of everything from genetics, hormones and pregnancy to lactation, diet and changes in weight,” Lehman says. “By identifying them through deep learning, we remove generalization” from the equation.

The model enables a similar level of customization and personalization in breast cancer screening and prevention programs, regardless of the varying recommendations put forth by the American Cancer Society and the U.S. Preventive Services Task Force. While the former advocates annual breast cancer screening starting at age 45, the latter recommends screening every other year, beginning at age 50.

“Instead of fitting women into a box and following either one of these recommendations, we can plan screening based on a particular patient’s risk of developing breast cancer, in keeping with what is indicated by the tool,” Lehman says.
“Some women may be told to have a mammogram every two years, while women whose risk is found to be higher might be steered toward supplemental screening. But across all patients and groups, diagnosis comes sooner.”

Moreover, Lehman observes, the model should take the accuracy of breast cancer risk assessment to a higher level. This, she says, was not possible with earlier models developed only on breast MRIs of Caucasian women.

“Our model makes breast cancer detection a more equitable process because it was built using images from Caucasian and non-Caucasian women alike, and it’s accurate for white and black women,” Lehman states. Such equitability is critical given that black women are, due to such factors as differences in detection and access to health care, 42% more likely than white women to die from breast cancer.

Additionally, the model may help to reduce the waste of imaging and other care resources, thus driving costs incurred by radiology practices and imaging departments downward. “This comes from the targeted care it enables,” Lehman says.

Next on the agenda is developing an AI tool for triaging patients. This will allow physicians to determine whether mammography patients are at greater risk for developing health problems other than breast cancer—for example, a different kind of cancer or cardiovascular disease.

Off to the MRI Races

MRI scans typically take 20 to 60 minutes to complete—considerably longer than, say, CT or X-ray. Under the umbrella of an AI-centric project called fastMRI, two unlikely partners—the New York University School of Medicine’s Department of Radiology and social media giant Facebook—are teaming up to make the modality 10 times faster than it is today.

FastMRI calls for speeding up MRI studies by capturing less data. An artificial neural network is trained to recognize the underlying structure of the images being captured. It harnesses this training to fill in the views that are missing from “fast scans,” producing the image detail necessary for accurate detection of abnormalities.
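To make the “fill in the missing data” idea concrete, here is a toy NumPy sketch that retrospectively undersamples a fully sampled image’s k-space and returns the aliased, zero-filled reconstruction a trained network would then learn to restore. The sampling pattern and parameters are assumptions for illustration, not the fastMRI project’s actual acquisition schemes or models.

```python
# Toy retrospective undersampling of k-space (illustrative parameters only).
import numpy as np

def undersample_and_reconstruct(image, keep_fraction=0.25, rng=None):
    rng = rng or np.random.default_rng(0)
    kspace = np.fft.fftshift(np.fft.fft2(image))            # fully sampled k-space
    nrows = image.shape[0]
    mask = np.zeros(nrows, dtype=bool)
    mask[nrows // 2 - 16:nrows // 2 + 16] = True            # always keep low frequencies
    mask |= rng.random(nrows) < keep_fraction               # plus a random subset of lines
    undersampled = kspace * mask[:, None]                   # "fast scan": unsampled rows dropped
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return zero_filled   # aliased image a neural network would learn to de-alias

# Example: reconstruct a random 256 x 256 "image" from roughly a quarter of its k-space rows.
aliased = undersample_and_reconstruct(np.random.rand(256, 256))
```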

The imaging dataset used to train the neural network was collected exclusively by NYU School of Medicine and consists of 10,000 clinical cases comprising approximately 3 million MRI images of the knee, brain and liver.

Daniel Sodickson, MD, PhD, New York University/NYU Langone Health

NYU professor Daniel Sodickson, MD, PhD, likens the AI-based image reconstruction technique behind fastMRI to the way in which the eyes and brain function when making out objects in low light.

“We don’t have a complete view of the object, because of the dark,” he elaborates. “However, we know in our brain what the underlying structure of the object is, and we quickly and accurately ‘fill in’ all of the details.”

So far, the team is seeing “encouraging results in the knee MRIs” less than a year into the project, Sodickson says. “We’ve accelerated these scans by a factor of six compared to standard MRI scans, and radiologists cannot distinguish between the images from both types of scans,” he adds. “So we’re halfway there with that.”

Speeding up MRI scans using AI will yield a multitude of benefits that will have a positive impact on patient care, Sodickson says. Reduced time in the scanner will lead to a better MRI experience for patients—especially children, the critically ill and individuals who have difficulty lying down or remaining still. Less time means less movement, which generally translates to better image quality and fewer patients backing out of exams. The end result is more accurate diagnosis and a higher likelihood of appropriate treatment.

Then too, there’s decreased wait time for patients in the queue. “The faster each MRI study is done, the greater the number of patients who can be scanned each day on each unit and have access to care,” Sodickson explains. “Access to MRI in general is increased, too, which has a positive effect on patient care as well. For example, in broad geographic areas where there’s only one MRI scanner, the wait for an MRI study could be really long. But if that scanner is being used to perform more scans per day, less delay follows and triaging cases is easier.”

Additionally, increasing exposure speed allows motion to be frozen out and a clearer view of the anatomy to be gained.

Sodickson emphasizes that the project is HIPAA-compliant. All MRI images being utilized in the endeavor have been scrubbed of identifying features, and no Facebook data of any kind are being used. NYU School of Medicine and Facebook are open-sourcing their work so that other researchers can build on the developments.

Following their initial success, Sodickson and his team have begun to test a handful of other MRI accelerations. And they’re looking beyond musculoskeletal applications.

“Now that we’ve seen encouraging results with the knee imaging, we’re starting on other datasets,” he says. “We hope to have good documentation and clinical evaluations before the year is out and to begin accelerating brain MRI along with body MRI—specifically, faster MRI of the abdomen, liver and kidneys.

“The potential for AI is deep. We’re just getting started,” Sodickson says. “In some applications, faster MRI may let patients avoid the exposure to ionizing radiation that occurs with X-rays and CTs. But I envision a day when scanners can be changed to gather just the data that’s needed. That’s long-term, of course, but it’s the power of AI in radiology.”

Julie Ritzer Ross, Contributor
