Q&A: vRad’s Shannon Werb and Benjamin W. Strong, MD, on Developing a Practical AI Strategy and the Future of Radiology

To the surprise of absolutely no one, artificial intelligence (AI) and deep learning technologies were the talk of RSNA 2017 in Chicago. More than 70 sessions were devoted to AI, and you couldn’t walk ten feet without reading or hearing the words “machine learning” or “algorithm.” Shannon Werb, vRad’s president and chief operating officer, and Dr. Benjamin Strong, vRad’s chief medical officer (CMO), sat down with ImagingBiz at RSNA to discuss their organization’s stance on these evolving technologies and what vRad and its parent company, MEDNAX, have planned for 2018 and beyond.

ImagingBiz: As we’re sitting here at RSNA, everyone is obviously focused on AI. How do you see AI unfolding in the next few years? What is vRad working on in this area?

Werb: Three years ago I don’t think we were even talking about AI at RSNA. But by last year, there was significant hype about it at the show and a lot of that had to do with fear. “We’re building a computerized doctor,” “We’re not going to need radiologists,” and so on.

At vRad, we are focusing our AI efforts on helping doctors do a better job by removing obstacles and elevating their reading experience. Our goal is to enable our physicians to read “in the zone” where their technical efficiency and interpretive accuracy are maximized. Our strategy is to take smaller bites of the apple instead of going after big, lofty visions. We worked quite a bit this year on learning where vRad and MEDNAX can bring value that leads to better patient outcomes through AI.

One area we’re focused on—and this is something I’m starting to hear from others working on AI as well—is figuring out how to curate the enormous amount of data involved to make sure it’s ready to train our algorithms to do something real. This will allow us to move forward with AI and focus on radiology practice efficiency in many key areas.

Strong: I agree with everything Shannon just said. The hype around AI was way overblown. We have no interest right now in building a radiologist out of a computer. We’ve recognized that there are some very useful aspects of present-day AI that come down to answering basic binary questions. There are a variety of things you can do with that information—certain measurements and so on—and that’s what we feel is possible today. Ultimately, won’t the radiologist be augmented by a “computerized assistant” that aggregates the answers to those binary questions? Again, it’s about taking smaller bites of the apple.

Today, we’re actually less concerned with the specificity or sensitivity of any given algorithm because of the way we plan to use them. We’re a massive teleradiology practice that reads millions of studies a year, so we have a particular need to prioritize a queue of studies and expedite those that involve life-threatening conditions to the specialist best suited to interpret the study. If we can get that assessment as soon as a study hits our list, saving us an average of 15 minutes, that has a monumental impact on patient care. That’s what we’re focused on: building a list of life-threatening conditions to expedite should we learn they are present in a study. This includes things like pulmonary embolism and acute stroke.

I really feel like we’re doing the practical thing with AI here at vRad. We don’t have a head-in-the-clouds goal about replacing radiologists; we want to build something that helps our radiologists right now.
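For illustration, here is a minimal sketch of the kind of worklist triage Dr. Strong describes. This is not vRad’s actual system; the `Worklist` class, the condition names, and the priority values are hypothetical assumptions, shown only to make the “expedite suspected critical studies” idea concrete:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical mapping of AI flags to priority levels (lower = read sooner).
# A production worklist would use far richer logic than a simple lookup.
CRITICAL_FINDINGS = {
    "pulmonary_embolism": 0,
    "acute_stroke": 0,
    "intracranial_hemorrhage": 0,
}
DEFAULT_PRIORITY = 5

_arrival = itertools.count()  # tie-breaker so equal priorities keep arrival order


@dataclass(order=True)
class QueuedStudy:
    priority: int
    arrival: int
    study_id: str = field(compare=False)


class Worklist:
    """Toy reading queue: AI-flagged critical studies jump ahead of routine ones."""

    def __init__(self) -> None:
        self._heap: list[QueuedStudy] = []

    def add_study(self, study_id: str, ai_flags: list[str]) -> None:
        priority = min(
            (CRITICAL_FINDINGS.get(flag, DEFAULT_PRIORITY) for flag in ai_flags),
            default=DEFAULT_PRIORITY,
        )
        heapq.heappush(self._heap, QueuedStudy(priority, next(_arrival), study_id))

    def next_study(self) -> str | None:
        return heapq.heappop(self._heap).study_id if self._heap else None


# A flagged PE study arriving second still gets read first.
wl = Worklist()
wl.add_study("CT-1001", ai_flags=[])
wl.add_study("CT-1002", ai_flags=["pulmonary_embolism"])
print(wl.next_study())  # CT-1002
print(wl.next_study())  # CT-1001
```

The key design choice in a sketch like this is that an AI flag only affects the ordering of the reading queue; a radiologist still interprets every study, which mirrors the “augment, not replace” framing above.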

ImagingBiz: It’s been two and a half years since MEDNAX acquired vRad, and this year alone, four major radiology practices have joined the organization. What’s in store for MEDNAX Radiology Solutions in 2018? What role is vRad playing in those developments?

Werb: Those recent additions have allowed us to take what vRad brings to the table—virtual, remote technology—to practices that have on-the-ground relationships and are present in hospitals. MEDNAX isn’t just building one large practice; we’re bringing together the best practices in the country, putting them on the same technology platform, and enhancing them with things like AI. Combining rather than replacing enables us to optimize the greatest capabilities of each of these players in their own markets. Our group in South Florida, for example, is a leader in cardiac imaging. With the common vRad platform, it’s now possible for that group’s expertise to be available to the other groups that have joined MEDNAX, so that the best doctors are looking at those images.

To date, almost all of vRad’s growth has been from our core business with only a small amount related to collaborations with these new practices. Looking forward, I expect that these collaborations will be an increasing contributor to our growth.

Strong: When I look at other vendors in this space, I don’t see them moving in that direction with the same emphasis. I think that is MEDNAX’s selling point, and it’s the thing that is going to set them apart.

ImagingBiz: Dr. Strong, as vRad’s CMO, you have very hands-on involvement with new innovations at vRad. What are you working on for 2018?

Strong: I want to tout natural language processing (NLP) and expression recognition. If I had to choose between NLP and image detection, I would choose to further develop NLP. It all goes back to being practical, and of all the things I can do today to improve patient care and the radiologist’s experience, NLP takes the cake. It enables research, it enables structured reporting, it enables quality assurance. You can use it to find patient records, prior reports, admission notes and many other things of that nature and present them in bulleted form. There is so much that can come from NLP once you build that basic capability with appropriate security. For example, one of the ways we’re using NLP at vRad is to automatically initiate a critical findings call to the ordering physician based on the radiologist’s dictation.

That’s the thing I’d like to see vRad just nail. We’ve already developed NLP with different vendors and different applications, but we want to consolidate that experience into a single NLP engine that can be applied to the different venues where we see the need.
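To make the critical-findings example concrete, here is a deliberately naive sketch of keyword-style detection over a dictation. It is not vRad’s NLP engine; real systems handle negation, uncertainty, synonyms, and report structure far more robustly, and every pattern and function name below is a hypothetical assumption:

```python
import re

# Illustrative patterns only; a production NLP engine goes far beyond
# keyword matching and simple negation checks.
CRITICAL_PATTERNS = {
    "pulmonary embolism": re.compile(r"\bpulmonary embol(ism|us|i)\b", re.IGNORECASE),
    "acute stroke": re.compile(r"\bacute (ischemic )?(stroke|infarct)\b", re.IGNORECASE),
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.IGNORECASE)


def critical_findings(dictation: str) -> list[str]:
    """Return findings mentioned in the dictation without an obvious negation
    earlier in the same sentence."""
    hits = []
    for name, pattern in CRITICAL_PATTERNS.items():
        for match in pattern.finditer(dictation):
            sentence_start = dictation.rfind(".", 0, match.start()) + 1
            if not NEGATION.search(dictation[sentence_start:match.start()]):
                hits.append(name)
                break
    return hits


report = (
    "Findings: Filling defect in the right main pulmonary artery, "
    "consistent with acute pulmonary embolism. No acute infarct."
)
for finding in critical_findings(report):
    # In the workflow Dr. Strong describes, a hit like this would trigger
    # a critical-findings call to the ordering physician.
    print(f"Critical finding detected: {finding}")
```

Even a toy version suggests why consolidation matters: once detection lives in a single engine, the same output could feed a critical-findings call, structured reporting, or quality assurance review.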

ImagingBiz: Is there anything specific that has grabbed your attention this year at RSNA?

Werb: The thing that caught my attention this year—and this goes back to what we were saying earlier—is the practical conversation between vendors and doctors about how AI is not actually going to replace radiologists.

This year, organizations are trying to take smaller steps, and that’s what they should be doing.

Strong: What has impressed me the most is the ability of people to succeed in the particular sphere of AI despite a lack of serious infrastructure. I’ve seen real-time demonstrations of real algorithms from organizations we are working with and it is spectacular. What I’m taking away from all of this is that two guys in a garage, when they have our enablement, can do this just as easily as mega corporations. I believed that was true last year, but I know it this year.

Michael Walter, Managing Editor

Michael has more than 18 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
