State of the Technology: Geoffrey Rubin, MD, on 3D Visualization
As CT technology continues to advance and the number of slices in a given exam grows exponentially, how is the role of enterprise visualization software evolving to suit the needs of the modern radiology department? ImagingBiz.com speaks with Geoffrey Rubin, MD, professor of radiology and vice chief of staff at Stanford University Hospitals and Clinics, Stanford, California, on the state of the technology.
ImagingBiz.com: Which enterprise visualization tools have proved most useful in enhancing 3D interpretations of cardiac imaging? How can techniques like bone removal and artery isolation be leveraged for more rapid interpretation, and what are the hazards involved?

Rubin: The tools most useful for cardiac imaging are multiplanar reformation, curved planar reformation, and volume rendering. There are hazards in using bone removal and arterial-tree isolation; whenever you perform any segmentation task, you run the risk of removing structures that you didn’t intend to remove, so it’s important to be careful when using these tools. For the heart in particular, bone removal isn’t so important. Vessel-tree extraction tends to be used to create volume renderings or maximum-intensity projections of the coronary arteries, but oftentimes it’s most desirable to use volume rendering to see the coronary arteries within the context of adjacent anatomy, such as the surface of the myocardium. I don’t find myself using these types of segmentation tools for cardiac imaging specifically; when assessing the coronary arteries, I stick with multiplanar and curved reformations. For other workstation applications (extracardiac CT angiography, for instance), bone removal can be very useful, such as with a lower-extremity CT runoff. There again, though, it can be risky, and you have to be diligent to make sure you haven’t removed adjacent vessels along with the bone.

ImagingBiz.com: Which tools have proved most useful in enhancing 3D interpretations of pulmonary imaging?

Rubin: Routine application of 3D processing for pulmonary imaging is still controversial. From the standpoint of most accurately assessing and following lung lesions, some type of volumetric measurement of their size is the most logical approach, but there isn’t yet enough consistency among techniques.
ImagingBiz.com: You’ve done a great deal of research on computer-aided detection for lung nodules. How does the use of computer-aided detection enhance lung-cancer diagnosis? Does it adversely affect workflow?

Rubin: There again, computer-aided detection in the lungs has been used mostly in a research setting; there are few implementations in commercial products. One of the real challenges with computer-aided detection is that it detects areas in lung CT images that are suspected of being lung nodules: structures defined as focal lung opacities up to 3 cm in size. Beyond that definition, the likelihood that any of those nodules represents a lung cancer is highly dependent on the size of the lesion and on who the patient is. We know computer-aided detection will detect more, or at least different, lung nodules than experienced CT readers do. We find more nodules, and there’s more equivalency of performance across readers when they use computer-aided detection. What we don’t know is whether those detected nodules will ultimately become cancer; that will require more investigation and more understanding. It’s complicated further by the fact that we don’t know whether detecting cancers at an early stage means that we can act on those cancers and allow people to live longer than they would have if the cancer were allowed to grow until it became symptomatic. That’s the subject of the National Lung Screening Trial, which is looking at CT as a screening tool and which we hope will give us the answer. The ultimate utility of computer-aided detection in detecting lung cancer, however, depends on many characteristics that are independent of the computer algorithm. At one end of the spectrum, computer-aided detection could find early cancers in a setting where their detection is valuable and important.
At the other end, computer-aided detection could be finding things in the CT scan that may or may not correspond to real structures in the lung, resulting in additional patient anxiety, work-up, and expense, and in further imaging that isn’t necessary.

ImagingBiz.com: How does acquisition technique affect cardiovascular and lung imaging? How are advances in 3D and 4D acquisition affecting advanced visualization?

Rubin: Acquisition technique affects everything in CT. You need the best acquisition you can get when you’re considering doing 3D. By best, I mean something that meets the need for high spatial resolution in all three dimensions, but that is also not too noisy and that respects the patient and the amount of radiation exposure it’s reasonable to deliver. While it would be simplest for me to say that everyone should have the thinnest sections with the lowest-noise protocol, that would mean maximizing the radiation output, and many patients would not benefit from it. For some applications, you can accept a little more noise and lower resolution. Depending on what part of the body is being imaged, there are different considerations. An upper or lower extremity is highly suited to the highest-resolution scans; these are parts that are relatively radiation insensitive, unlike the neck and torso, for instance. In smaller body parts, you can use the thinnest sections and avoid high x-ray–tube output while still providing low-noise images, because there is less tissue to attenuate the beam. In general, we use the thinnest sections in the smaller body parts and in body parts where we have less concern about the effect of radiation. The thinnest sections are at the submillimeter level. For anything related to 3D, I would not accept datasets with section thickness greater than 1.5 mm, and I prefer reconstruction at 0.7-mm increments.
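Rubin’s thresholds for 3D-suitable datasets can be expressed as a simple screening rule. The sketch below is illustrative only (the function name and categories are not from the interview, nor from any real product): it classifies a CT series by the reconstructed section thickness and reconstruction increment he cites.

```python
# A minimal sketch encoding the rule of thumb quoted in the interview:
# section thickness above 1.5 mm is not accepted for 3D work, and
# ~0.7-mm reconstruction increments are preferred. The function name and
# return labels are hypothetical, chosen here for illustration.

def assess_3d_suitability(section_thickness_mm: float,
                          recon_increment_mm: float) -> str:
    """Classify a CT series for 3D postprocessing per the stated thresholds."""
    if section_thickness_mm > 1.5:
        return "unsuitable"      # too thick for reliable 3D reformation
    if recon_increment_mm <= 0.7:
        return "preferred"       # submillimeter, overlapping reconstruction
    return "acceptable"          # usable, but not the preferred increment

print(assess_3d_suitability(2.5, 2.5))    # unsuitable
print(assess_3d_suitability(0.625, 0.5))  # preferred
print(assess_3d_suitability(1.25, 1.0))   # acceptable
```

In practice these two values would be read from a series’ DICOM metadata rather than passed by hand; the point of the sketch is only to make the quoted cutoffs concrete.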
There are a tremendous number of new CT acquisition technologies coming down the pike, but there haven’t been enough data to see how they’re affecting imaging per se. It’s too early to say what the specific impact is, but in general, I think these new technologies are likely to result in reduced radiation exposure, improved image quality, and higher-resolution acquisition in both the spatial and temporal domains.

ImagingBiz.com: How has enterprise visualization evolved to match advances in CT technology? What challenges arise from working with ever-larger datasets, and how can advanced visualization help address them?

Rubin: It’s key that advanced visualization be brought to bear as datasets get larger. These tools are intended to facilitate the rapid and effective navigation of large datasets. What we’ll see more of is attention not just to pure visualization tools, but to computer vision, where the computer actually identifies features and then characterizes them, quantifies them, or at least highlights them. Computer-aided detection, for instance, is a type of computer vision: you’re asking the computer to do more than just display images. It’s hard to predict how these algorithms will evolve, but they will become increasingly important in defining how imagers look at and manage datasets.

ImagingBiz.com: You mentioned earlier that much of this computer-vision technology is not ready for prime time. What needs to change in order to usher in the future that you envision for enterprise visualization?

Rubin: Computer vision is complicated. In addition to developing an algorithm that’s capable of distinguishing normal from abnormal structures, the software needs to do it in a context where it presents information that’s useful.
At the basic level, it should improve efficiency, allowing physicians to be as accurate and efficient as possible without increasing false positives or false negatives. In its most refined state, it would augment what a physician could do without it, and that carries a lot more responsibility. With graphics, it’s simply: here’s a picture; do with it what you will. Once you apply computer vision, there is the risk of the computer making nothing seem like something significant, or eliminating something significant from the display. Much more research and investigation are required. The efficiency side of things is heavily dependent on the implementation and the interface. There’s the algorithm that finds the suspicious area, and then there’s the way the software shows it to the radiologist. A great algorithm could bring value, but if the interface is too cumbersome, you eliminate the efficiency advantage. It’s more than just the algorithm; it’s also putting it into a smart user interface.

ImagingBiz.com: Stanford has a well-known and very busy 3D laboratory. Does the laboratory support radiology workflow, and if not, what kind of access do radiologists have to their own enterprise visualization tools?

Rubin: Radiologists all have access to those tools. The 3D laboratory is very valuable for specific reasons, but it does not eliminate the need for radiologists to do primary exploration of the data. The analogy I like to use is that of a typical ultrasound exam: a sonographer examines the patient and records protocol-determined, standardized views of the anatomy. This provides consistent output that is reliable for both radiologists and referring physicians, but it’s not uncommon for there to be abnormalities that are not fully understood by the sonographer. It’s then up to the physician to pick up the probe and explore directly. The same is true of a CT dataset. The data in the workstation are like a virtual patient in the ultrasound suite.
The mouse attached to the workstation can be used as a probe to allow exploration. The 3D technologists follow strict protocols to create standardized views, but the physicians still need to interact directly and explore the data themselves, to search through and find what might not have been seen or recorded by the technologist.

Cat Vasko is editor of ImagingBiz.com and associate editor of Radiology Business Journal.