vRad Structured Reporting: Keeping Eyes on Images

When it comes to dictating radiology reports to produce consistently presented diagnostic evaluations, Benjamin W. Strong, MD (ABR, ABIM), is bullish on two tools: (1) customized speech-to-text software, and (2) flexible diagnostic checklists. In fact, to vRad’s chief medical officer, the evolutionary integration of those two workflow aids—which he has been using himself and refining for vRad’s 500-plus radiologists over the past 10 years—is structured reporting.

For starters, Dr. Strong, something of a structured reporting evangelist, urges all new vRad radiologists to read Atul Gawande’s The Checklist Manifesto. “The more rigorously you adhere to a checklist, the better able your mind is to step outside the list’s elements and evaluate them in relation to one another,” Dr. Strong explains. “The process actually frees the mind for more thorough and professional evaluation while guaranteeing that no simple step along the process is skipped.”

As for voice recognition, he’s pleased to see how widely the profession of radiology has adopted the technology. Unfortunately, though, that’s not the end of that story. “I’m never shocked,” he says with unchecked disappointment in his voice, “to see that so many practices are not using it to its best advantage.” Which is to say they’re mostly using mass-marketed speech-to-text software “as is,” straight off the shelf.

It would seem Dr. Strong has earned the right to be disappointed. During the past 10 years, he has spearheaded the creation of vRad’s custom-built voice recognition system. It now houses a massive library of macros—two- and three-word phrases that trigger the software to fill in a standardized report template based on the 3,000 most frequently uttered sentences in radiology. It has built-in redundancies so that similar words, such as, say, small and mild or renal and kidney, are interchangeable. It uses red type to flag “off-macro” and unusual words and phrases, as well as those that tend to vex referring physicians, for easy review prior to report signing.
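The mechanics described above—short trigger phrases expanding into standardized sentences, synonym redundancy, and red-flagging of off-macro wording—can be sketched in a few lines. This is a hypothetical illustration only; the phrases, synonym pairs, and function names are invented for the example and are not vRad's actual implementation.

```python
# Illustrative sketch of a macro-driven dictation system: trigger phrases
# expand to full standardized sentences, synonyms are normalized first,
# and anything outside the macro library is flagged for review.

SYNONYMS = {"mild": "small", "kidney": "renal"}  # interchangeable terms

MACROS = {
    "small renal cyst": "There is a small simple cyst in the renal cortex.",
    "normal chest": "The lungs are clear. The cardiomediastinal silhouette is normal.",
}

def normalize(phrase: str) -> str:
    """Map each word to its canonical synonym before macro lookup."""
    return " ".join(SYNONYMS.get(w, w) for w in phrase.lower().split())

def expand(phrase: str) -> tuple[str, bool]:
    """Return (report text, flagged); flagged marks off-macro dictation."""
    key = normalize(phrase)
    if key in MACROS:
        return MACROS[key], False
    return phrase, True  # off-macro: shown in red for review before signing

# "mild kidney cyst" normalizes to "small renal cyst" and expands to the
# standardized sentence; unrecognized phrasing passes through, flagged.
text, flagged = expand("mild kidney cyst")
```

The point of the normalization step is exactly the redundancy Dr. Strong describes: the radiologist need not remember which synonym the library was built on.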

And his partners, vRad’s software engineers, have otherwise ironed out vexing voice-recognition usage wrinkles that, in the past, tempted reading radiologists to shift their eyes from the diagnostic images on which they’re reporting in order to check the text screen for accuracy. Encouraging radiologists to remain laser-focused on images throughout the dictation period is the primary purpose of speech-to-text tools. A good solution builds radiologists’ confidence in the tool’s ability to get every word right. Second-rate software has the opposite effect, creating wariness and, with it, compromised mental focus.

Saying everything by saying nothing

Dr. Strong points out that vRad’s take on structured reporting is a direct outgrowth of the practice’s overall approach to streamlining workflow. Structured reports follow a pre-formatted template to automatically condense and standardize radiologists’ dictations, enabling referring physicians to quickly review exam results, triage patients and take action.

“In fact, we refer to our structured reporting simply as ‘the workflow,’ and it’s highly synchronized with our radiology order-management system,” he explains.

Meanwhile, the macros allow the system to take information about a study from multiple sources—DICOM header, technologist’s entry—and automatically populate the header with such information as contrast administration, 3-D reconstructions or coronal and sagittal reformats. In the past, vRad’s radiologists had to repeatedly chant, mantra-like, sentences such as CT scan of the abdomen and pelvis was performed from the diaphragm to the pubic symphysis with 5 mm cuts with administration of intravenous contrast and oral contrast and coronal and sagittal reformats were provided.

“You know what we say now to populate our header information? Nothing!” says Dr. Strong. “What we say at the beginning of every study, in order to set up the template, is absolutely nothing. All of that standard wording is automatically pulled into the system, and as it is purely objective—not interpretive—there is no reason not to do so.”
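Because the header wording is purely objective, it can be assembled entirely from study metadata without a single dictated word. A minimal sketch of that idea follows; the field names and phrasing are assumptions made for illustration, not vRad's actual schema.

```python
# Illustrative sketch: build standard, objective header wording from study
# metadata (e.g. DICOM tags, technologist's entry) so the radiologist
# dictates nothing to set up the report template.

def build_header(meta: dict) -> str:
    """Assemble the boilerplate technique description from metadata."""
    sentence = f"{meta['modality']} of the {meta['region']} was performed"
    if meta.get("slice_mm"):
        sentence += f" with {meta['slice_mm']} mm cuts"
    if meta.get("iv_contrast"):
        sentence += " with administration of intravenous contrast"
    lines = [sentence + "."]
    if meta.get("reformats"):
        lines.append(" and ".join(meta["reformats"]).capitalize()
                     + " reformats were provided.")
    return " ".join(lines)

header = build_header({
    "modality": "CT scan",
    "region": "abdomen and pelvis",
    "slice_mm": 5,
    "iv_contrast": True,
    "reformats": ["coronal", "sagittal"],
})
```

Every clause here is derived from data the system already has, which is why, as Dr. Strong puts it, there is no reason to dictate any of it.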

Allowing that there are times when the macros need to be substantially modified, he says that, around 10% of the time, he does look at the dictation screen. “On occasion, I will insert a measurement, insert a little anatomic specificity. And every now and then, I’ll encounter some heinous tumor for which I just don’t have all the preconfigured sentences, and I will speak directly to it. But over the 10 years that I’ve been doing this, and in the 150,000 studies that I’ve read on our system, I would estimate that only about 5% of the texts I’ve submitted have been direct dictation.”

But about those checklists: Conventional wisdom holds that rigid adherence to any sort of script detracts from a radiologist’s freedom to apply experienced, human judgment. That conventional wisdom would be mistaken, says Dr. Strong, adding that no two individuals’ search patterns need to be exactly the same.

“You can go in any order,” he says. “The important thing is not that I adhere to a specific order dictated by someone else; it’s that I adhere to the same order in my head every time I look at a specific study type. The inclusion of each of those elements in each radiologist’s search pattern is what leads you to diagnostic accuracy and sufficient reporting uniformity.”

Eyes on the prize: undistracted reading

The vast majority of radiology reports require at least some customization. This leads many radiologists to resemble tennis fans in what Dr. Strong refers to as the Wimbledon effect. “The radiologist is looking back and forth between two screens, one with the image and the other with the dictation,” he says. “And this makes you revisit things you’ve already completed. You get to the end of the case, you get to the end of that report, and you say, ‘It says the aorta is normal. Gosh, did I look at the aorta? I don’t know.’”

At such times the radiologist often goes back to redo work. “You get signer’s remorse, wondering if you called something normal by default that you didn’t evaluate,” says Dr. Strong. “During all that time that I spent messing around with editing a normal template, were my eyes even on the diagnostic images for an appropriate amount of time?”

Eyes on images and nothing else: the very fruit of vRad’s success with structured reporting.

Best of all, vRad’s solution—a library of sentences that took 18 months to build—is easily intuited and thoroughly redundant.

“Within an hour or two of training and an understanding of the basic rules by which those trigger phrases were created, a radiologist can implement this system of structured reporting right away,” says Dr. Strong. “After a quick training session, most physicians cannot wait. They want to leave and go do it themselves.”

He recalls one radiologist who called him a couple of days after she had transferred vRad’s macro library to her system. “She said, with a chuckle, ‘This is the most intuitive thing—it’s like it’s crawling around in your brain.’”

It really is that efficient and that easily implemented, says Dr. Strong. And also that impressive, as vRad clients who struggle to get a handful of radiologists to follow a common format marvel at vRad’s consistently getting hundreds to do so, 24/7.

Despite the undeniable notability of the unsung achievement, Dr. Strong retains the good humor and sincere humility of the inventor whose brainchild was born of necessity. “I kind of view myself as being like the guy who invented the alarm clock,” he says. “That guy came up with a solution simply because he overslept.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
