Five Challenges Facing Radiology in the Era of Big Data

Eliot Siegel, MD

On June 6, 2013, at the annual meeting of the Society for Imaging Informatics in Medicine (in Dallas, Texas), Eliot Siegel, MD, chief of radiology and nuclear imaging at VA Maryland Health Care System (Baltimore), copresented “Personalized Medicine.” He envisions a promising future for radiology—if the profession can surmount the obstacles that it faces when it comes to big data. “Medicine in general is behind the curve on big data,” Siegel says, “and we have the chance to get radiology ready for the coming era of big data and personalized medicine, if we can address five key challenges.”

Those challenges, he says, include the development of acquisition standards and image-quantification tools, the patient-oriented configuration of electronic medical records (EMRs), and the lack of image-tagging standards and of mechanisms for sharing images and data. “Radiology, historically, has led the way in digital systems,” Siegel says. “I’d love to see diagnostic imaging lead the way, in medicine, for big-data applications, but true big data require the combining of data resources in a way that is discoverable and accessible.”

Acquisition Standards

EMRs have no problem handling laboratory data and structured data (such as patient diagnoses and readmission rates), but radiology data represent a challenge. “Different scanners use different protocols when they acquire information, so the results are not directly comparable,” Siegel explains. “Even theoretically quantitative information, like standard uptake value on a PET scan, shows a fair amount of variability from machine to machine.” At VA Maryland Health Care System, for instance, a comprehensive database represents 15 years’ data for 22 million patients—but mining radiology-specific measurements from this wealth of information has proved challenging, to say the least. “If we are going to mine big data from radiology, we need better standardization of image-acquisition protocols,” Siegel says.
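The standard-uptake-value (SUV) variability Siegel describes is easy to demonstrate. The sketch below uses hypothetical numbers and the common body-weight SUV convention (dimensionless under the usual assumption that tissue density is about 1 g/mL); it shows how a single protocol choice—whether the injected dose is decay-corrected to scan time—changes the reported value:

```python
import math

F18_HALF_LIFE_S = 109.77 * 60  # fluorine-18 half-life, in seconds

def decay_corrected_dose(dose_bq, elapsed_s, half_life_s=F18_HALF_LIFE_S):
    """Activity remaining after elapsed_s of radioactive decay."""
    return dose_bq * math.exp(-math.log(2) * elapsed_s / half_life_s)

def suv_bw(tissue_bq_per_ml, dose_bq, weight_g):
    """Body-weight SUV; dimensionless if tissue density is taken as 1 g/mL."""
    return tissue_bq_per_ml / (dose_bq / weight_g)

# Same patient, same measured uptake -- but one site decay-corrects the
# injected dose to scan time (60 min post-injection) and another does not.
# All values below are hypothetical.
tissue, dose, weight = 5_000.0, 370e6, 70_000.0  # Bq/mL, Bq, g
suv_raw = suv_bw(tissue, dose, weight)
suv_corrected = suv_bw(tissue, decay_corrected_dose(dose, 3600), weight)
print(round(suv_raw, 2), round(suv_corrected, 2))  # the two SUVs differ by ~46%
```

Standardizing whether (and how) such corrections are applied is exactly the kind of acquisition-protocol agreement that would make a result "consistent from institution to institution."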
“When I have a result, I should be able to expect that the result would be consistent from institution to institution, given the same patient on the same day.”

Image-quantification Tools

Image-quantification tools have advanced over the past few years, but a lack of standardization undermines the usefulness of the data that they generate. “Even where we think we have good quantification—for instance, in measurement of lung nodules—we have yet to standardize the method of measurement,” Siegel says. “Some measure the diameter of the nodule, while others recommend use of nodule volume. Depending on which algorithm you use, you can also get different diameters or volumetric results.”

Further, quantitative analysis shouldn’t be limited to physical parameters, Siegel says. “It would be great to have a way to assess, numerically, interstitial lung disease or osteopenia,” he notes. “Statements like ‘fair degree of osteopenia’ are difficult to translate into numerical assessments like those we get from laboratory tests.” This disadvantage might rear its head soon, he adds: “People are writing decision-support algorithms that take into account information in the EMR related to different qualitative numbers,” he says. “If we don’t have data that can be discovered and understood by these computer systems, we’ll have a lot of problems.”

Image-tagging Standards

Variability isn’t limited to image acquisition. Different vendors’ workstations also use proprietary methods of saving and tagging information, and even DICOM structured reporting fails to specify a standardized mechanism across different users and systems. As Siegel points out, “DICOM structured reporting is not a lingua franca for multiple different systems.” Instead, Siegel hopes to see the widespread implementation of a new standard that can be used both with DICOM structured reporting and with XML, the tagging mechanism widely used outside medical imaging.
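As a rough illustration of what machine-readable image tagging looks like, the fragment below builds a simplified XML annotation with Python's standard library. The element names, attributes, and UID are invented for illustration; they do not follow the actual AIM or DICOM structured-reporting schemas:

```python
import xml.etree.ElementTree as ET

def make_annotation(study_uid, finding, value_mm):
    """Build a toy XML annotation tying a measurement to an imaging study.

    The schema here is hypothetical, for illustration only.
    """
    root = ET.Element("ImageAnnotation", studyInstanceUID=study_uid)
    obs = ET.SubElement(root, "Finding", label=finding)
    meas = ET.SubElement(obs, "Measurement", units="mm")
    meas.text = str(value_mm)
    return ET.tostring(root, encoding="unicode")

# Example: tag a nodule measurement (fake UID and values).
print(make_annotation("1.2.840.99999.1", "lung nodule", 8.4))
```

Because the annotation is plain XML rather than a proprietary workstation format, any system with an XML parser can discover the measurement, which is the kind of cross-system accessibility Siegel is calling for.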
This standard, developed by the US National Cancer Institute’s caBIG Imaging Workspace, is referred to as annotation and image markup (AIM). “AIM creates a universal tagging mechanism for describing medical images,” he says. “Both XML and DICOM structured reporting can be used to describe information about (and measurements made on) an image. AIM represents an enhancement of, rather than a replacement for, DICOM structured reporting, for those embedding information about images in the DICOM metadata; it can also be stored separately from the images, using XML as non-DICOM metadata.”

EMR Configuration

It’s regrettable that EMRs developed in such a way that their structure is patient oriented, Siegel says. EMR data are organized around the patient, rather than around other commonalities. “Right now, the EMR is being implemented primarily as a digital representation of what was formerly on paper, without hyperlinks or indexing,” he observes. “If I want to find out what happened to a patient on June 14, 2013, I can see that he came in with a rash—but if I ask the EMR to show me every time he came in with a rash, it can’t do that.” EMRs should be configured, instead, so that they can be queried using any data point or set of data points. “If I observe findings on a patient’s chest CT that represent a combination of basilar reticular lung disease, hyperexpansion, and spontaneous pneumothorax, I need to be able to see whether other patients have presented in a similar manner, how they were treated, and what the outcomes were,” Siegel says.

Image- and Data-sharing Mechanisms

Siegel points out that, culturally speaking, medicine is not known for its willingness to share data. “We have a culture where people keep their own data because they have an incentive to do so,” he says. “Why should I share my data with someone else if having them all to myself gives me an advantage?
In general, people just don’t share their EMRs, and finding a way to change that is really important.” Clinical trials, for instance, have yielded a wealth of excellent databases that could be used to develop or enrich decision-support mechanisms—but many are gathering dust for lack of mechanisms through which to share them. “It’s one thing to talk about the challenges associated with mining big data,” Siegel says. “It’s another for radiology actually to think about (and work on) these issues. It’s not just a matter of saying that we’re going to mine big data; we have to work to get our house in order in such a way that we can truly take advantage of them.”

Cat Vasko is editor of Medical Imaging Review and associate editor of Radiology Business Journal.
