Radiologists warn of cybersecurity risks stemming from large language model use
Radiologists are warning of cybersecurity risks that come with the deployment of large language models (LLMs) such as ChatGPT in clinical practice.
Members of the specialty shared their advice in a special report published Wednesday by the Radiological Society of North America’s scientific journal Radiology: Artificial Intelligence. Rads have used these AI tools—which can generate humanlike text and perform various language-processing tasks—for everything from simplifying recall letters to assisting with decision-making.
However, experts are urging peers to carefully assess possible cybersecurity risks before deploying LLMs in medical imaging. Such AI models are susceptible to hackers, who can extract sensitive patient data, manipulate information or alter outcomes through “data poisoning.”
"The landscape is changing, and the potential for vulnerability might grow when LLMs are integrated into hospital systems," lead author Tugba Akinci D'Antonoli, MD, a neuroradiology fellow with University Hospital Basell, Switzerland, said in a May 14 announcement from RSNA. “That said, we are not standing still. There is increasing awareness, stronger regulations and active investment in cybersecurity infrastructure. So, while patients should stay informed, they can also be reassured that these risks are being taken seriously, and steps are being taken to protect their data."
A team of researchers from several institutions and multiple countries helped assemble the report, hoping to “equip professionals with strategies to mitigate these threats for safe use.” They note that vulnerabilities include injecting intentionally wrong or malicious information into an AI model’s training data and bypassing a model’s internal security protocols. Such attacks can potentially lead to “severe” breaches, with data manipulated or service disrupted.
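To make the data-poisoning threat concrete, here is a minimal sketch (not taken from the report) showing how flipping even a modest fraction of training labels degrades a simple classifier. The synthetic dataset, model and poison rates are all hypothetical choices for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for a simple imaging-triage task.
X = rng.normal(size=(2000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels (the attacker's corruption),
    retrain, and measure accuracy on clean test data."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # corrupted labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for rate in (0.0, 0.1, 0.3, 0.45):
    print(f"poison rate {rate:.0%}: test accuracy {accuracy_with_poisoning(rate):.3f}")
```

The same principle applies to an LLM: if an attacker can slip corrupted examples into the data a model learns from, its outputs drift in ways that are hard to detect downstream.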
"Radiologists can take several measures to protect themselves from cyberattacks," D'Antonoli said. "There are of course well-known strategies, like using strong passwords, enabling multi-factor authentication, and making sure all software is kept up to date with security patches. But because we are dealing with sensitive patient data, the stakes (as well as security requirements) are higher in healthcare."
Safe integration of this technology in radiology requires a secure deployment environment, strong encryption and continuous monitoring. It’s also important to use only tools that have been vetted and approved by the institution’s IT department, with sensitive information anonymized to protect patients.
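As a rough illustration of what pre-submission anonymization can look like, the sketch below strips a few obvious identifiers from report text before it leaves the institution’s environment. The patterns and placeholder tags are assumptions for demonstration; regexes alone will miss identifiers, so real de-identification should rely on a vetted, IT-approved pipeline.

```python
import re

# Hypothetical redaction patterns; a real pipeline needs far broader coverage
# (names, addresses, accession numbers, free-text dates, etc.).
REDACTION_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(report_text: str) -> str:
    """Replace obvious identifiers with placeholder tags before the
    text is sent to any external tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        report_text = pattern.sub(f"[{label}]", report_text)
    return report_text

note = "Patient seen 03/14/2024, MRN: 00123456. Callback 555-867-5309."
print(redact(note))
# -> Patient seen [DATE], [MRN]. Callback [PHONE].
```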
“Moreover, ongoing training about cybersecurity is important,” D'Antonoli added. “Just like we undergo regular radiation protection training in radiology, hospitals should implement routine cybersecurity training to keep everyone informed and prepared.”
You can read much more about their guidance in RSNA’s Radiology: Artificial Intelligence.