Penn implements automated follow-up tracker with impressive results
Ensuring that patients receive appropriate follow-up imaging after suspicious findings on mammography or other cancer screening is notoriously difficult: about one-third of US women who are surgically treated for breast cancer never undergo their recommended follow-up, slipping through the cracks. Faculty and researchers from the Perelman School of Medicine at the University of Pennsylvania recognized the need for a reliable monitoring system to guarantee that patients receive clinically indicated follow-up, creating an automated recommendation-tracking program that identifies patients with suspicious lesions in abdominal organs and notifies the appropriate care providers.
Led by Assistant Professor of Radiology Hanna M. Zafar, MD, the group published its experience creating and validating the system in the Journal of the American College of Radiology.
Outside of mammography, few systems consistently and accurately report follow-up recommendations, the authors said. While existing products allow radiologists to manually flag patients and receive alerts when follow-up imaging is performed, Zafar and her team sought greater automation.
The development process began with implementing a standardized lexicon for reporting lesions in the liver, pancreas, kidneys, and adrenal glands, using a set of codes covering disease states including “indeterminate,” “suspicious,” and “known cancer.” In addition, the Penn researchers built reporting templates for most abdominal CT and MR exams, as well as certain ultrasound exams. This uniformity allowed the researchers to build modules for data mining, compliance tracking, and the killer app: follow-up monitoring.
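As a rough sketch of how such a lexicon might be modeled, the snippet below uses the organ list and disease states named in the article; the combined code strings (e.g., "LIVER-SUSPICIOUS") are hypothetical, since the article does not publish Penn's actual codes:

```python
from enum import Enum

# Organs and disease states named in the article; the combined code
# strings are hypothetical, not Penn's published lexicon.
class Organ(Enum):
    LIVER = "liver"
    PANCREAS = "pancreas"
    KIDNEY = "kidney"
    ADRENAL = "adrenal gland"

class DiseaseState(Enum):
    INDETERMINATE = "indeterminate"
    SUSPICIOUS = "suspicious"
    KNOWN_CANCER = "known cancer"

def lexicon_code(organ: Organ, state: DiseaseState) -> str:
    """Build the coded string a reporting template might embed, e.g.
    lexicon_code(Organ.LIVER, DiseaseState.SUSPICIOUS) -> 'LIVER-SUSPICIOUS'."""
    return f"{organ.name}-{state.name}"
```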
The data mining module parses each report for organ-specific codes, enabling the engine to identify patients for follow-up and store the data behind the institutional firewall, in compliance with Health Insurance Portability and Accountability Act (HIPAA) regulations.
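A minimal sketch of that parsing step, assuming the hypothetical organ-state codes above rather than Penn's actual implementation, could look like this:

```python
import re

# Hypothetical pattern for codes like "LIVER-SUSPICIOUS"; Penn's real
# lexicon strings are not published in the article.
CODE_PATTERN = re.compile(
    r"\b(LIVER|PANCREAS|KIDNEY|ADRENAL)-(INDETERMINATE|SUSPICIOUS|KNOWN_CANCER)\b"
)

def extract_codes(report_text: str) -> list[tuple[str, str]]:
    """Pull every (organ, disease state) code out of a report's text."""
    return CODE_PATTERN.findall(report_text)

# Example: a report carrying a code flags its patient for follow-up tracking.
report = "IMPRESSION: 1.2 cm lesion, LIVER-SUSPICIOUS. Recommend MRI in 3 months."
codes = extract_codes(report)
if codes:
    # A production system would instead write to a database kept behind
    # the institutional firewall, per HIPAA.
    print("Flag for follow-up monitoring:", codes)
```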
The compliance tracking module searches reports to ensure that organs are coded correctly and that the recommended exam(s) are clinically relevant to the original findings. If discrepancies are found, an email is sent to the reporting radiologist, with a short grace period when the report was co-authored by a trainee.
“For reports coauthored by a trainee, the first e-mail notification is sent to the trainee who has 3 days to issue an addendum before the faculty member is notified,” they wrote. “Faculty members have 7 days from e-mail notification to issue an addendum before the case is forwarded to the section chief and ultimately to the department chair.”
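That escalation schedule is concrete enough to express directly; the sketch below follows the windows the authors describe, though the function and its parameters are illustrative rather than Penn's code:

```python
from datetime import date

def escalation_target(first_notice: date, today: date, has_trainee: bool) -> str:
    """Return who should currently be notified about an uncorrected report.
    Windows follow the article: trainee 3 days, faculty 7 days from their
    own notification, then section chief and department chair."""
    elapsed = (today - first_notice).days
    if has_trainee and elapsed < 3:
        return "trainee"
    faculty_start = 3 if has_trainee else 0
    if elapsed < faculty_start + 7:
        return "faculty"
    return "section chief, then department chair"

# Example: a trainee-coauthored report, five days after first notification.
print(escalation_target(date(2014, 1, 1), date(2014, 1, 6), True))  # "faculty"
```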
Finally, the follow-up module checks patient records daily to determine whether follow-up has been completed; designated follow-up coordinators periodically review patients remaining in the system.
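Such a daily sweep might be structured along these lines; every name here is a stand-in, since the article describes the module's behavior but not its internals:

```python
from datetime import date

def run_daily_follow_up_check(db) -> None:
    """Sketch of the daily sweep: close out recommendations whose follow-up
    imaging has been performed; anything left stays queued for the
    follow-up coordinators. `db` stands in for the records interface,
    and all method names here are invented for illustration."""
    for rec in db.fetch_pending_recommendations():
        if db.imaging_completed(rec.patient_id, rec.recommended_exam):
            db.mark_complete(rec, completed_on=date.today())
```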
Two radiologists reviewed cases added to the database during the first month after going live, ensuring findings were reported appropriately and lesions that didn’t require follow-up weren’t caught in the program’s net.
The recommendation tracker produced outstanding results, according to Zafar et al. Over the course of a year, from July 2013 to June 2014, 28,000 exams from 19,000 patients were added to the database. By November 2014, 43 percent of these patients had undergone follow-up imaging, and the data mining module achieved an accuracy of more than 90 percent.
Other institutions could learn from this experience, the authors said, and they laid out a blueprint for imaging providers to follow.
“To implement its own follow-up monitoring system in a similar fashion, a radiology practice would have to develop a standardized template that would capture discrete follow-up recommendations,” they wrote. “The practice would need a means to capture reports using this template (for example, using either commercial report-search products or an in-house solution such as ours) and identify patients who receive follow-up recommendations on index imaging studies and undergo subsequent follow-up imaging.”
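As a toy illustration of what capturing a "discrete" follow-up recommendation could mean in practice, with every field name invented for the example:

```python
from dataclasses import dataclass

@dataclass
class FollowUpRecommendation:
    """Invented structure showing what a discrete recommendation
    captured from a standardized template might contain."""
    patient_id: str
    index_exam: str           # e.g., "CT abdomen/pelvis"
    organ_code: str           # e.g., "LIVER-SUSPICIOUS" (hypothetical)
    recommended_exam: str     # e.g., "MRI liver"
    recommended_within_days: int
```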
Although the process is mostly automated, Zafar et al. caution potential adopters that a modicum of human effort will still be required, mostly in validation and verification. In addition, the basic framework could be extended to non-imaging follow-up.
While excited about the possibility of additional real-world testing, Zafar and her colleagues are driven by a shared desire to improve patient care, according to the article.
“We are actively collaborating with other sites to implement our system and study its impact on patient care outside of a tertiary academic medical center,” they wrote. “The success of a multisite collaboration would enable more careful monitoring of patients in the real-world setting, where care is often delivered at multiple sites and records may not be easily exchanged.”