5 Steps Radiologists Can Take Today to Improve Reports

For all its high-tech gadgets, tools, prompts, aids and reminders, the modern radiology report really isn’t all that different from the first of its kind, rendered as a longhand note.

“The X ray shows plainly that there is no stone of an appreciable size in the kidney,” reported Dr. William Morton of the New York Post-Graduate Medical School and Hospital to Dr. Leopold Stieglitz, a Park Avenue physician, in 1896. “I only got the negative today and could not therefore report earlier. The picture is not so strong as I would like, but it is strong enough to differentiate the parts.”

There you had it—workflow problems, along with a bit of a hedge, all the way back in the 19th century. With the handwritten note projected before the audience, Curtis P. Langlotz, MD, PhD, professor of radiology and biomedical informatics at Stanford, wryly noted: “Some things never change.”

The occasion was the California Radiological Society’s 2015 annual meeting and leadership summit in early October. Langlotz highlighted the historic moment by way of introducing select material from his recently published book, The Radiology Report: A Guide to Thoughtful Communication for Radiologists and Other Medical Professionals.

“Why are our reports so similar to those very early ones? Because it actually is very convenient for radiologists to pick up a microphone and describe what they see,” he said. “But other forces are starting to counterbalance and drive changes into the radiology report.”

Those forces include not only referring physicians fed up with inconsistent nomenclature but also payors and practice managers demanding details like radiation dose, the Joint Commission requiring notifications of critical results, CMS assessing incentives (or penalties) for quality—and, increasingly, patients expecting full and meaningful transparency in every aspect of their care.  

Stressing the timeless importance of standardization across various aspects of the radiology report, Langlotz pivoted from the past to the present and then on to the future. He walked attendees through several steps that radiologists—especially those currently using speech recognition systems and doing predominantly narrative reporting—can take to make sure tomorrow’s reports are better than today’s. 

1. Adopt and implement standard templates.

Langlotz displayed a standardized template designed by his Stanford colleague David Larson, MD, and said one of the template’s key attributes is its accountability to a formal governance structure.

“You want to have the leadership on board,” Langlotz said, adding that Larson’s governance model called for forming a committee to set rules, creating a style guide with a checklist, and auditing for adherence.

“He made it easy to do the right thing,” Langlotz said. “Once these templates were decided upon, they came up automatically as a default setting in our reporting system. You could change them if you wanted to, but it was easy to work with what came up first. We actually had very good adherence.”

For those who have run into snags coming up with templates, or gaining buy-in on them, Langlotz suggested turning to RSNA’s RadReport.org. There, an open library lets users upload templates as well as download them, while the curated Report Template Library serves as an authoritative “voice from on high” guide. The site is multilingual, and its templates have been viewed or downloaded more than 2.3 million times.
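
To make the idea concrete, a standardized template is essentially a fixed set of sections, each pre-populated with default wording that the radiologist then edits. Below is a minimal sketch in Python, assuming a hypothetical chest radiograph template; the section names follow common practice, but the wording is illustrative rather than an actual RadReport entry.

    # Minimal sketch of a sectioned report template with default text.
    # Section names and default wording are illustrative only.
    CHEST_XR_TEMPLATE = {
        "EXAM": "Chest radiograph, PA and lateral views.",
        "CLINICAL HISTORY": "[ ]",
        "COMPARISON": "None.",
        "FINDINGS": "Lungs are clear. Heart size is normal. "
                    "No pleural effusion or pneumothorax.",
        "IMPRESSION": "No acute cardiopulmonary abnormality.",
    }

    def render(template):
        """Render the template as the default text loaded into the report."""
        return "\n\n".join(f"{name}:\n{text}" for name, text in template.items())

    print(render(CHEST_XR_TEMPLATE))

The default-setting behavior Langlotz described maps onto this directly: when the reporting system loads such text automatically, editing it is easier than starting from a blank screen, which is what drove the strong adherence at Stanford.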

2. Create phrase lists with trigger words.

Langlotz described a reporting tool that brings up a set of sentences when certain words are dictated. “As I am reporting, I am heavily using the fast-forward and rewind buttons to jump between fields, and I’m using these words to have certain phrases come in automatically,” he explained.

Emphasizing the need to include standard macros for regulatory and billing compliance, he pointed to “presence attestation for a procedure” as an example, noting that it’s required by many payors.
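
At bottom, a phrase list is a lookup from a short trigger word to a full standard sentence. Here is a minimal sketch with hypothetical trigger words and macro wording, including a payor-style presence attestation; none of these phrases are quoted from Langlotz’s actual system.

    # Hypothetical trigger words mapped to standard phrases.
    PHRASES = {
        "attest": "I was present for and personally supervised the entire procedure.",
        "critical": "Critical results were communicated to [ ] by [ ] on [ ] at [ ].",
        "nofollowup": "No follow-up imaging is recommended.",
    }

    def expand(dictation):
        """Replace any dictated trigger word with its full standard phrase."""
        return " ".join(PHRASES.get(w.lower(), w) for w in dictation.split())

    print(expand("critical"))

Commercial speech recognition systems implement this as voice macros; the point is that the inserted wording is identical every time, which is what makes the auditing described next possible.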

As notification of critical results is required by the Joint Commission, creating phrase lists with trigger words “makes it very easy for you to audit your reports when you are done,” Langlotz said. “Then you use this standard template to detect whether there has been a notification. Thus, you can get a dashboard in real time to show how many of the critical results have a documented notification. You can go back and make sure that you addend those reports to make sure that someone is aware of those critical results.”
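
Because the macro inserts the same wording every time, the audit can indeed be rudimentary string matching. A minimal sketch, reusing the hypothetical notification phrase from above and assuming reports arrive as simple dictionaries:

    # Count how many critical-result reports carry the standard
    # notification phrase; phrase and sample reports are hypothetical.
    NOTIFIED = "Critical results were communicated to"

    def notification_dashboard(reports):
        """Return (documented, total) counts for critical-result reports."""
        critical = [r for r in reports if r["critical"]]
        documented = sum(1 for r in critical if NOTIFIED in r["text"])
        return documented, len(critical)

    reports = [
        {"critical": True,
         "text": "... Critical results were communicated to Dr. A by Dr. B ..."},
        {"critical": True, "text": "... tension pneumothorax identified."},
    ]
    print(notification_dashboard(reports))  # (1, 2): one report needs an addendum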

3. Document trainee discrepancies.

Stanford is home to a large residency/fellowship program, and its emergency department is keenly interested in knowing not only how radiology reports change due to discrepancies but also how often. Langlotz’s team has developed a set of macros to track trainee discrepancies as follows:

  • Macro Agree: There are no substantial differences between the preliminary results and the impressions in this final report.
  • Macro Minor Change: Preliminary results were reviewed and minor modifications were made in this final report as follows: [  ]
  • Macro Significant Change: Preliminary results were reviewed and modified in this final report as follows: [  ]
    [  ] was notified by [  ] of the modification on [  ] at [  ].

“We can pull these out using some fairly rudimentary processing because this is standard wording here (at Stanford),” said Langlotz. “And we can take a look at how our rates have changed. If you are overreading someone, and you want to know what your rates are, you can use a program like this with some analytics layered on top and get some good information on the back end.”
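
Since the three macros use fixed wording, the “fairly rudimentary processing” can amount to searching each final report for a phrase distinctive to each macro. A minimal sketch: the search phrases are drawn from the macros above, while the rest is an assumption about how such a script might be organized.

    from collections import Counter

    # Distinctive substrings drawn from the standard macros above.
    MACRO_PHRASES = {
        "agree": "no substantial differences between the preliminary results",
        "minor": "minor modifications were made in this final report",
        "significant": "were reviewed and modified in this final report",
    }

    def classify(report_text):
        """Return the discrepancy category, or None if no macro is present."""
        for category, phrase in MACRO_PHRASES.items():
            if phrase in report_text:
                return category
        return None

    def discrepancy_rates(report_texts):
        """Tally the fraction of final reports falling into each category."""
        counts = Counter(classify(t) for t in report_texts)
        total = sum(counts.values())
        return {cat: n / total for cat, n in counts.items()}

Layering analytics on top, as Langlotz suggested, is then a matter of grouping these rates by attending or by trainee.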

4. Provide assessment categories.

Here, Langlotz pointed to what he called “the frontier,” describing an adaptation of BI-RADS numerical categorizations for breast cancer imaging that he, Tessa Cook, MD, and Hanna Zafar, MD, developed for other-than-breast imaging at the University of Pennsylvania.

“I know there’s some labor associated with BI-RADS and the regulations around it,” he said. “But the advantage with these kinds of global assessment categories is that—ultimately, over time—you get a global definition of what each assessment means.”

Just as radiology now knows which breast imaging features correlate with a BI-RADS 3 or a BI-RADS 4, the profession can begin to learn the same kinds of things for other organs, Langlotz said.

“We can learn how often a rating of 3 in the liver or the pancreas results in a cancer. And we can provide benchmarks for callback rates and cancer rates and all of the things we do for radiologists that help us learn,” he added, including which radiologists are on target for callback rates, which ones need more training and which ones should be emulated.
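
The benchmarking Langlotz described reduces to tallying outcomes by organ and assessment category. A minimal sketch, assuming hypothetical follow-up records of the form (organ, category, cancer found):

    from collections import defaultdict

    # Hypothetical follow-up records: (organ, assessment category, cancer found).
    records = [
        ("liver", 3, False),
        ("liver", 3, True),
        ("pancreas", 4, True),
    ]

    def cancer_rates(records):
        """Return the observed cancer rate for each (organ, category) pair."""
        hits, totals = defaultdict(int), defaultdict(int)
        for organ, category, cancer in records:
            totals[(organ, category)] += 1
            hits[(organ, category)] += cancer
        return {key: hits[key] / totals[key] for key in totals}

    print(cancer_rates(records))  # {('liver', 3): 0.5, ('pancreas', 4): 1.0}

With enough such records, the same tallies yield the per-radiologist callback-rate benchmarks Langlotz mentioned.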

Thanks to BI-RADS, it is fairly easy to help an individual “learn how to be a good breast radiologist because we have this very clear information in the literature,” Langlotz said.

“Yes, it does take a little bit of extra time,” he said. “But the payoff with our clinicians (at Penn Medicine) was tremendous. We involved them in the design of the ordering [component], and we also layered on top of it an automated program to look at the follow-up that we are recommending. They love this.”

5. Use standard exam codes.

“This is probably something that you don’t want to take on unless you are changing your RIS or doing something else major,” Langlotz said of standardizing exam codes. “But it really was helpful for us [at UPenn].”

He described how, after Penn Medicine acquired several hospitals in the Philadelphia area, standard exam codes proved hugely helpful in unifying clinical and business practices across all hospitals in the system.

Referencing RSNA’s RadLex Playbook, Langlotz predicted this component of the RadLex ontology is about two years away from becoming a national standard. In the meantime, he noted, numerous resources are available at playbook.RadLex.org.
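
In practice, standardizing exam codes means building a crosswalk from each site’s local RIS codes to one shared identifier, such as a RadLex Playbook ID. A minimal sketch with hypothetical local codes; “RPID0000” is a placeholder, not a real Playbook entry (the real ones live at playbook.RadLex.org).

    # Hypothetical crosswalk from site-specific RIS exam codes to a
    # shared standard code. "RPID0000" is a placeholder identifier.
    CROSSWALK = {
        ("hospital_a", "CTCH01"): "RPID0000",
        ("hospital_b", "CT-CHEST-WO"): "RPID0000",
    }

    def normalize(site, local_code):
        """Map a site-specific exam code to the shared standard code."""
        return CROSSWALK.get((site, local_code), "UNMAPPED")

    print(normalize("hospital_a", "CTCH01"))   # RPID0000
    print(normalize("hospital_c", "XR123"))    # UNMAPPED: flag for review

Once every site maps to the same code, utilization, protocols and billing can be compared apples to apples across the system, which is the unification Langlotz described at Penn.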

Returning the discussion to its starting point, Langlotz displayed a letter sent by radiologist Morton to clinician Stieglitz a few weeks after that original report. Morton wrote:

“In regard to a proper charge to make to your patient, I find it difficult to decide and I am most willing to be guided to a great extent by you. My usual charge to radiograph through the entire body is $100. If we had found the stone in the kidney it would have been worth that money, but we didn’t. I think therefore it will be fair to say $75.”

“There’s another thing that hasn’t changed,” said Langlotz. “There is clearly a business behind [radiology]. We know business considerations are changing. Today many of us are still working in a fee-for-service environment, but ACR is talking about Imaging 3.0, and clearly the environment is going to change. I want to leave you with this question.

“What is the role of the radiology report in a post fee-for-service world? What if we no longer had to do one study, one report? What if we could make a brief comment on all those ICU radiographs only when there was something that needed to be said—as opposed to having to generate that narrative every day? What about if we only documented the change? What are the other areas within a report that can really change into something completely different if we no longer needed it as our artifact to say, ‘Yes we did it; please pay.’”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
