Radiologists warn of cybersecurity risks stemming from large language model use

Radiologists are warning of cybersecurity risks that come with the deployment of large language models (LLMs) such as ChatGPT in clinical practice. 

Members of the specialty shared their advice in a special report published Wednesday in the Radiological Society of North America’s scientific journal Radiology: Artificial Intelligence. Radiologists have used these AI tools, which can generate humanlike text and perform various language-processing tasks, for everything from simplifying patient recall letters to assisting with clinical decision-making.

However, experts are urging peers to carefully assess possible cybersecurity risks before deploying LLMs in medical imaging. Such AI models are susceptible to hackers, who can extract sensitive patient data, manipulate information or alter outcomes through “data poisoning.” 

"The landscape is changing, and the potential for vulnerability might grow when LLMs are integrated into hospital systems," lead author Tugba Akinci D'Antonoli, MD, a neuroradiology fellow with University Hospital Basell, Switzerland, said in a May 14 announcement from RSNA. “That said, we are not standing still. There is increasing awareness, stronger regulations and active investment in cybersecurity infrastructure. So, while patients should stay informed, they can also be reassured that these risks are being taken seriously, and steps are being taken to protect their data."

A team of researchers from multiple institutions and countries assembled the report, hoping to “equip professionals with strategies to mitigate these threats for safe use.” They note that vulnerabilities include attackers inserting intentionally wrong or malicious information into an AI model’s training data (the “data poisoning” described above) or bypassing a model’s internal security protocols. Such attacks can lead to “severe” breaches, with patient data manipulated or services disrupted.
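To make the poisoning threat concrete, here is a minimal, self-contained sketch, not taken from the report, showing how flipping a fraction of training labels degrades a toy classifier. The synthetic dataset and every name in it are illustrative assumptions, not anything the authors published.

```python
# Illustrative only: a label-flipping "data poisoning" toy example.
# Assumes numpy and scikit-learn are installed; not code from the RSNA report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for imaging-derived features (e.g., "nodule vs. no nodule").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Fit a simple classifier on the given training labels, score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# An attacker flips 30% of the training labels before the model is (re)trained.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"clean-label accuracy:    {train_and_score(y_train):.2f}")
print(f"poisoned-label accuracy: {train_and_score(poisoned):.2f}")
```

The same principle scales up: an attacker who can quietly corrupt even part of a model’s training or fine-tuning data can shift its outputs without touching the deployed system itself.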

"Radiologists can take several measures to protect themselves from cyberattacks," D'Antonoli said. "There are of course well-known strategies, like using strong passwords, enabling multi-factor authentication, and making sure all software is kept up to date with security patches. But because we are dealing with sensitive patient data, the stakes (as well as security requirements) are higher in healthcare."

Safe integration of this technology in radiology requires a secure deployment environment, strong encryption and continuous monitoring. It’s also important to only use tools that have been vetted and approved by the institution’s IT department, with sensitive information anonymized before it is shared, to protect patients.
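As one hedged illustration of that anonymization step, the sketch below strips a few common identifiers with regular expressions before text ever leaves the institution. The patterns and the placeholder `query_llm` endpoint are hypothetical, and real de-identification should rely on tooling vetted by the institution’s IT department rather than a handful of regexes.

```python
# Illustrative only: regex-based scrubbing of a few obvious identifiers
# before report text is sent to any external model. A hypothetical sketch,
# not a complete de-identification solution.
import re

PATTERNS = [
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),     # simple date formats
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
]

def scrub(text: str) -> str:
    """Replace a few common identifier patterns with placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for an institution-approved LLM endpoint.
    raise NotImplementedError("route through your IT-vetted service")

report = "Follow-up CT for patient, MRN: 4471123, prior exam on 03/14/2024."
print(scrub(report))  # -> "Follow-up CT for patient, [MRN], prior exam on [DATE]."
```

The design point is simply that scrubbing happens on the institution’s side, before any text reaches a third-party service, with monitoring and encryption layered around it.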

“Moreover, ongoing training about cybersecurity is important,” D'Antonoli added. “Just like we undergo regular radiation protection training in radiology, hospitals should implement routine cybersecurity training to keep everyone informed and prepared.”

You can read much more about the group’s guidance in RSNA’s Radiology: Artificial Intelligence.

Marty Stempniak

Marty Stempniak has covered healthcare since 2012, with his byline appearing in the American Hospital Association's member magazine, Modern Healthcare, and McKnight's. Prior to that, he wrote about village government and local business for his hometown newspaper in Oak Park, Illinois. He won Peter Lisagor and Gold EXCEL awards in 2017 for his coverage of the opioid epidemic.
