Marooned on Level 3: Leverage IT to Improve Reporting
In a December 2 session at RSNA 2009 in Chicago, Illinois, on using next-generation health care IT to improve radiology, David Avrin, MD, PhD, radiologist at the University of California–San Francisco Medical Center, opened with a comment made to him by one of his hospital administrators: “Images these days are so clear that even I can read them.” Of course, no patient would want a hospital executive reading his or her images instead of a radiologist, but the comment underscores the theme of Avrin’s talk: with so many improvements in health care IT both currently available and on the horizon, there is no excuse not to leverage new tools for improvements in radiology quality and safety.
Avrin began by discussing electronic medical records (EMRs). He describes the first generation of EMRs as data collectors, the second as documenters, and the third as helpers. He believes that most EMRs today are still marooned on the third level, acting as aids to clinicians without reaching the currently accessible level of the fourth generation: the partner. “The fourth-generation EMRs are advanced systems that provide more decision-support capabilities, and that are operational and accessible across the continuum of care,” he says. In the future, he looks for an even more advanced generation that he calls the mentor: complex and fully integrated systems that include all previous capabilities, but also serve as a main source of decision support, guiding care for both clinicians and consumers.
The issue, in Avrin’s view, is that most EMRs currently lack the federated ability to merge information from disparate databases to create a comprehensive picture of a patient’s clinical background and current needs. Those data must be accessible as a whole to enable users to perform data mining and analytics, which will show where clinicians and staff can improve their processes with regard to quality and safety. Complete access is also vital to improving radiologists’ clinical confidence and efficiency. “In order for them to interpret well, I train our residents to make a clear distinction between our findings versus their impressions. That requires access to prior exams, prior reports, and the clinical indication for the current study—and (the biggest problem in our environment) it requires identifying the ordering clinician for that round-trip confirmation of the findings,” Avrin says.
Workstation Wish List
Avrin discusses two workstation-level IT enhancements that he hopes to see become widespread in the near future. The first is decision support for radiologists. “Most people see five to ten studies a day where they’re uncertain what they’re looking at,” he says. “A lot of people in medicine and radiology use Google and Yahoo. Wikipedia has become very popular, but there, you get the expert controversy—you don’t know the author’s credibility.” Avrin recommends, instead, using a commercial product that delivers vetted electronic content via subscription, or using a free resource tailored to radiology, such as ARRS GoldMiner® or Yottalook™.
The other enhancement that Avrin touches on is converting radiologists from conventional dictation to voice recognition. “This can be controversial,” he warns. “Most private practices are extremely resistant to voice recognition for two reasons—it’s slower, and private-practice radiologists are often offended to be doing work that used to be done by a hospital-paid transcriptionist.” Avrin says that radiologists need to change their perspectives, especially with the field under increased scrutiny, thanks to its ever-escalating costs. He says, “What you bring to the process is the quality and timeliness of your reading, and the only way to improve that is to use voice recognition.” Paired with an ontology such as RadLex® (developed by the RSNA and several professional societies) and integrated bidirectionally with PACS, voice recognition can enable users to perform the kind of data mining that pioneers of next-generation IT advocate, while allowing a voice worklist to drive the PACS for improved radiologist efficiency.
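The data mining Avrin describes depends on reports being coded against a shared vocabulary rather than left as free text. The sketch below is only an illustration of that idea, not Avrin's or UCSF's system: it tags finalized report text against a tiny placeholder term dictionary (the code strings are invented, not actual RadLex identifiers) and then aggregates coded findings across a batch of reports. A production system would load the full RadLex term set and handle negation ("no evidence of..."), which this toy matcher ignores.

```python
from collections import Counter
from typing import Dict, List

# Tiny illustrative dictionary mapping report phrases to ontology-style codes.
# The codes are placeholders, NOT actual RadLex identifiers; a real deployment
# would load terms from the RadLex ontology distribution.
TERM_CODES: Dict[str, str] = {
    "pneumothorax": "CODE-PTX",
    "pulmonary embolism": "CODE-PE",
    "fracture": "CODE-FX",
}

def code_report(report_text: str) -> List[str]:
    """Return the ontology-style codes whose terms appear in a report."""
    text = report_text.lower()
    return [code for term, code in TERM_CODES.items() if term in text]

def mine_reports(reports: List[str]) -> Counter:
    """Count coded findings across a batch of finalized reports."""
    counts: Counter = Counter()
    for report in reports:
        counts.update(code_report(report))
    return counts

if __name__ == "__main__":
    # Illustrative report impressions only; negated findings are not handled.
    sample_reports = [
        "IMPRESSION: Small right apical pneumothorax.",
        "IMPRESSION: Acute pulmonary embolism.",
        "IMPRESSION: Nondisplaced distal radius fracture.",
    ]
    print(mine_reports(sample_reports))
    # Counter({'CODE-PTX': 1, 'CODE-PE': 1, 'CODE-FX': 1})
```

Once every report carries codes like these, the quality-and-safety queries Avrin advocates (how often a finding appears, how often it is confirmed by the ordering clinician) become simple aggregations over the coded data.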
For the skeptical, Avrin reports that at UCSF, where this integration has been implemented, turnaround times average less than seven hours, with voice-recognition use at 72% (and rising). He has a little trick for those wary of using voice recognition: “Don’t look at the screen during free dictation—just check over what you have at the end,” he says. Then, all that remains to be done is to sign the report and release it, with accompanying images, to the enterprise. “Our value addition, as radiologists, is our ability, in the era of PACS, to return an interpretation with images quickly,” Avrin says. “You cannot improve quality and safety without informatics today, and tools do exist to level the quality of physician ordering and radiologist interpretation.”
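Metrics like the ones Avrin cites are straightforward to compute once report timestamps are captured electronically. The following minimal sketch is illustrative only, with hypothetical field names and made-up sample data rather than UCSF's actual reporting system: it averages the hours from study completion to final signature and reports the share of studies dictated with voice recognition.

```python
from datetime import datetime
from statistics import mean
from typing import List, NamedTuple

class ReportRecord(NamedTuple):
    study_completed: datetime    # when the exam images became available
    report_signed: datetime      # when the radiologist signed the final report
    used_voice_recognition: bool

def turnaround_hours(records: List[ReportRecord]) -> float:
    """Average hours from study completion to final signed report."""
    return mean(
        (r.report_signed - r.study_completed).total_seconds() / 3600.0
        for r in records
    )

def voice_recognition_share(records: List[ReportRecord]) -> float:
    """Fraction of reports produced with voice recognition."""
    return sum(r.used_voice_recognition for r in records) / len(records)

if __name__ == "__main__":
    # Illustrative sample records, not UCSF data.
    sample = [
        ReportRecord(datetime(2009, 12, 2, 8, 0), datetime(2009, 12, 2, 13, 30), True),
        ReportRecord(datetime(2009, 12, 2, 9, 15), datetime(2009, 12, 2, 17, 45), False),
    ]
    print(f"Average turnaround: {turnaround_hours(sample):.1f} hours")
    print(f"Voice-recognition share: {voice_recognition_share(sample):.0%}")
```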
Cat Vasko is editor of Radinformatics and ImagingBiz.com.