What will be the short- and long-term effects of AI on radiology training programs?
As the influence of artificial intelligence (AI) and machine learning (ML) continues to spread throughout medical imaging, radiology training programs may need to update their curricula and prepare for both the short- and the long-term effects of these new technologies, according to a new commentary published in Academic Radiology.
“Although there has been much discussion in the lay press about the role of ML-based AI tools in radiology—including proclamations that we should stop training radiologists—both AI/ML and medical imaging experts predict that these new software tools will be central to radiologists' practice across the research, clinical, and education domains,” wrote authors Shahein H. Tajmir, MD, and Tarik K. Alkasab, MD, PhD, of Massachusetts General Hospital and Harvard Medical School.
In the short term, Tajmir and Alkasab explained, AI and ML will be used to “predictate and preanalyze” examinations, taking on tedious tasks such as measuring nodules. Training programs will need to prepare trainees for this reality, teaching them how to work with these tools and take advantage of what they can do.
“Radiologists are taking on a new responsibility on top of image acquisition and interpretation: active supervision of AI tools,” the authors wrote. “Therefore, trainees will also need to learn how to step into that role.”
Training programs will also need to decide exactly when AI tools should be introduced to trainees. Is it too much for first-year residents? Or should AI be introduced as early as possible? In addition, the authors noted, trainees will need to learn to recognize when these technologies aren't working correctly.
“Perhaps standardized teaching cases where the AI tools perform well and poorly will be developed to assess a resident's ability to know when to leverage an AI tool's output and when to discard them because of failure,” Tajmir and Alkasab wrote.
In the long term, these technologies could reshape training programs altogether. For instance, the authors noted, radiology trainees and attending radiologists could one day have their work and their interactions monitored by computers looking for opportunities to correct reading errors and improve efficiency. AI could also be used to select which cases trainees should read, perhaps even generating its own cases to test specific aspects of a trainee's abilities.
It’s also easy to imagine a future where AI is used to alert educators when a trainee is in need of assistance. Is the trainee struggling with a specific read but afraid to speak up? The AI could note this and address it as needed.
Of course, the authors add, these technologies have the potential to go too far.
“Clearly, to be acceptable, these kinds of functionality will have to be carefully integrated and monitored to avoid a sense of overbearing or intrusiveness,” the authors wrote. “One hopes an ability to adjust a tool to an appropriate level of watchfulness given the experience level of the trainee and the comfort level of the attending will hopefully allow for improvements in patient care without disrupting the development of trainees.”
ML-trained natural language processing could also help training programs provide trainees with feedback. How did a resident’s preliminary report differ from the attending radiologist’s final report? What mistakes were made? This could be communicated directly to the trainee, helping them learn from their mistakes.
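As a rough illustration of the kind of report comparison the authors describe, the sketch below uses Python's standard-library difflib to flag sentences that changed between a hypothetical preliminary report and the attending's final version. The example reports, the function name, and the simple sentence-splitting are invented for illustration only; a real feedback system would rely on far more sophisticated NLP than a plain text diff.

```python
import difflib

# Hypothetical example reports, for illustration only.
preliminary = (
    "Lungs are clear. No pleural effusion. Heart size is normal."
)
final = (
    "Lungs are clear. Small right pleural effusion. Heart size is mildly enlarged."
)

def report_feedback(prelim: str, final_report: str) -> list[str]:
    """Return a sentence-level unified diff of the two reports."""
    prelim_sentences = [s.strip() for s in prelim.split(".") if s.strip()]
    final_sentences = [s.strip() for s in final_report.split(".") if s.strip()]
    diff = difflib.unified_diff(
        prelim_sentences,
        final_sentences,
        fromfile="preliminary",
        tofile="final",
        lineterm="",
    )
    return list(diff)

# Sentences prefixed with "-" were changed or removed in the final report;
# those prefixed with "+" were added or revised by the attending.
for line in report_feedback(preliminary, final):
    print(line)
```

Even this crude comparison surfaces the two discrepancies (the missed effusion and the cardiac size), hinting at how an automated system might route such differences back to the trainee as targeted feedback.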