Legal considerations for artificial intelligence in radiology and cardiology

In efforts to speed workflow, improve patient care and augment an increasingly shorthanded clinical workforce, artificial intelligence (AI) is starting to see wider adoption across healthcare. This is especially true in medical imaging, which accounts for the majority of U.S. Food and Drug Administration (FDA) AI clearances. But questions remain about who is liable if an AI application fails and the failure results in a malpractice lawsuit.

This was largely an academic discussion just a few years ago, but the number of FDA-cleared algorithms has since exploded. More than 300 of these AI apps have been cleared since 2019 alone, and that number is expected to grow rapidly.

As of January 2023, there are more than 520 FDA-cleared AI algorithms. The vast majority of these, 396, are for radiology. Some of the radiology algorithms perform automated detection of disease across many specialties, while others cover everything from advanced visualization and PACS workflows to specific scanner applications.

Cardiology, with 58, is the second-largest group of FDA-cleared AI algorithms, many of which are imaging specific. In addition, 18 of the radiology algorithms are specific to cardiac imaging, and more than 20 others cover CT and ultrasound systems, or their reconstruction algorithms, that are also used in cardiology.

From a healthcare IT perspective, this adds new layers of complexity and new places where a glitch can directly impact patient care. So the big question is: who gets sued when AI fails?

"The answer is everyone will get sued. It's a big, hairy issue," explained Brent Savoie, MD, JD, vice chair for radiology informatics and section chief of cardiovascular imaging at Vanderbilt University. He spoke on this topic at the Society of Cardiovascular Computed Tomography (SCCT) 2022 meeting.

The need for healthcare AI regulatory oversight 

He said this is mainly because there is not a good regulatory framework for AI in the U.S. at this time. This means there is no guidance on how to deploy the technology safely, and there are no clear legal protections that say "if you do this" you are not at risk of a lawsuit, he explained.

"When you don't have regulations, your default is tort law, and that is a obviously a scary scenario for anyone looking to implement AI technologies, or who may have already implemented it," Savoie said. "I think tort law is a pretty bad mechanism to ensure patient safety."

He said regulatory agencies and quasi-regulatory groups like accreditation bodies offer a pathway that is more predictable and more flexible than lawsuits.

"Regulations can really help create standards that people can follow to create an environment that ensures patient safety and reduces error, much more efficiently than tort can do," Savoie explained. 

He outlined that the physician could share responsibility when AI fails, since they are ultimately responsible for diagnosis and reporting. The AI vendor could be liable if the algorithm has a bug, a built-in bias or missing information. The IT department and hospital could be liable if the AI was not updated or there is a glitch caused by interactions with other software on the system. Savoie said the most likely result is that everyone gets sued and it will be up to the courts to sort it out.

IT needs to understand AI is not just software; it has clinical impact

He said IT teams also need to understand this software is classified as a medical device, so more care needs to be taken with AI from a liability standpoint. This is not just another PACS or advanced visualization program.

"One of the more frightening things is when you start to look at all the places where error can occur, and this is across all medical devices, there are are a lot of points of potential failure. I think most people focus on what do we do when the algorithm breaks, but what they have not focused much on is the other processes. Did you install it correctly? Are you maintaining your servers? Do you have a process to even make sure the AI is still turned on?" He said. 

Outside the algorithm there is a software wrapper that can break just as easily as Outlook does for most people. He said alerts can fail to fire and integrations can break. So, Savoie said, vendors and the implementing institutions need ways to know what to monitor and test.
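
To make this concrete, here is a minimal, purely illustrative sketch of the kind of automated health check an implementing institution might run against a deployed imaging AI service. The status endpoint, field names and thresholds are assumptions made for the example, not any specific vendor's interface.

```python
# Hypothetical health check for a deployed imaging AI service.
# The endpoint, JSON fields and thresholds are illustrative assumptions,
# not any specific vendor's API.
import datetime
import json
import urllib.request

SERVICE_URL = "http://ai-gateway.local/status"  # hypothetical status endpoint
MAX_SILENT_HOURS = 4                            # alert if no recent results

def check_ai_service(url=SERVICE_URL):
    """Return a list of problems found; an empty list means the service looks healthy."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = json.load(resp)
    except Exception as exc:
        return [f"Service unreachable: {exc}"]

    problems = []

    # Is the algorithm actually enabled, or was it silently switched off?
    if not status.get("algorithm_enabled", False):
        problems.append("Algorithm is installed but not enabled")

    # Has it produced output recently, or has an integration quietly failed?
    last_result = status.get("last_result_time")
    if last_result:
        age = datetime.datetime.now() - datetime.datetime.fromisoformat(last_result)
        if age > datetime.timedelta(hours=MAX_SILENT_HOURS):
            problems.append(f"No results in {age}; check the PACS/scanner integration")
    else:
        problems.append("Service has never reported a result")

    return problems

if __name__ == "__main__":
    for issue in check_ai_service():
        print("ALERT:", issue)
```

Even a simple check like this, run on a schedule, would catch the "is the AI still turned on" failure mode Savoie describes long before a missed finding surfaces in a lawsuit.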

At the technologist level, scanner settings might be altered in ways that impact the accuracy of the AI without anyone realizing it. The radiologist or cardiologist reading the study may not understand the outputs being created by the AI.

"At all this points there are sources of error, so if you are suing someone because something went wrong related to the AI algorithm, the easiest thing to do is just name everybody and then sort it out," Savoie said. "It is sort of a worst case scenario and hopefully will not happen, but if there is no structured legal environment, that's what it devolves to."

He said laws might be similar between states, but the juries that hear these cases might be significantly different from place to place. So if there are no specific regulatory protections for an AI vendor or for the healthcare groups using the AI, they may think twice about implementing the AI or doing business in that location.

"This is all kind of a downer message I feel like I am giving, but I really am optimistic about the future of AI and I think creating these processes is essential for preserving that future, Savoie explained. "If you are at an institution and install something and you face an adverse outcome and it comes to litigation, it's going to be really difficult to get approval for a budget request for the next application."

AI algorithms are FDA-regulated medical devices 

IT departments will need to make sure the AI apps are updated and possibly validated after each new upgrade, but many of these departments are already strapped for resources.

"In radiology, hospitals are used to relatively low-risk, post-processing applications, so if your 3D advanced visualization software is not working or it is glitchy, you can probably find a work around that will not have a major impact on patient care. So when IT is prioritizing what applications to patch of update, its usually the bigger applications that will have a bigger impact on care. The smaller applications are usually at the bottom of that list, and that is a serious problem with AI applications that may have immediate clinical impact," stressed. 

He said it is important that the clinical teams communicate to IT that the AI is not a one-off research app, and that some of these AI apps need to be placed in a higher priority bucket. While a piece of software may not be viewed with the same seriousness as an implantable medical device, AI apps go through an FDA review process just as those devices do, he said. From that standpoint, he said, this is where health system IT teams will become important, active participants on the clinical side of things.

Savoie said it needs to be stressed that AI applications are not just another piece of non-FDA-regulated imaging software, which is what most IT departments are more accustomed to.

Issues with bias in AI algorithms 

Another issue to consider with AI is the potential bias an algorithm may have depending on how it was programmed, or on the data sets it was trained on. There can be variability in clinical presentation between patients of different races and ethnicities. If an AI is trained using a data set made up only of white men, it may miss differences in how that disease presents in Black patients, women or Asian patients.

"There are a lot of FDA-cleared algorithms out there and that discussion is still ongoing. Think we are going to learn a lot of lessons pretty rapidly," he said.

Savoie suggests having a rigorous monitoring program for AI to watch for biases that may show up as performance issues when the algorithm is deployed. Some people may say this sounds like a lot of work, but he points out that quality assurance (QA) checks are already done on the other medical devices used in imaging. 

"You wouldn't not do QA on on your CT scanner, and you do QA on your monitors, so why wouldn't you do QA on this?" Savoie asked. 

AI needs to be trusted in many medical applications because there is no human substitute

Some clinicians and managers may view AI as a nice-to-have rather than a necessary device. However, AI already is showing it can improve care on a daily basis. Savoie said lung nodule evaluation and CT calcium scoring are great examples. In calcium scoring, he said the AI applications can automatically segment all the coronary vessels, then measure and total all the deposits of calcium in a couple of seconds, producing a score for each artery and an overall score. This process could be done manually, but it is so time consuming no one bothers doing it.

"I can't manually perform a calcium scan or manually do a characterization of a lung nodule. I can give a guesstimate," he said, but the accuracy of the AI is better.

Radiologists do look at these AI-generated findings and do a quick visual assessment to see if the AI outputs roughly match what they see on the CT scan.

He said the danger of AI is that someone might take the AI output at face value and not double-check it, or there might not be an easy way to verify what the AI is seeing. This is especially true with detailed, 3D measurements that evaluate the entire volume of a lesion or plaque, rather than just measuring a sliver of one 2D cross section.

This also may become impossible to check as deep learning algorithms start looking for radiomic signatures, using hundreds or thousands of data points in an image to produce risk scores and disease predictions that human eyes cannot comprehend. In some cases with the latest deep-learning, self-taught algorithms, developers are still trying to figure out what their AI is looking at, even though the AI's predictions in testing are consistently correct.

Related AI liability reading from peer-reviewed journals and other sources:

Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement - Radiology

Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy - American Journal of Roentgenology

When Artificial Intelligence Models Surpass Physician Performance: Medical Malpractice Liability in an Era of Advanced Artificial Intelligence - Journal of the American College of Radiology

Potential Liability for Physicians Using Artificial Intelligence - Journal of the American Medical Association

AI Health Care Liability: From Research Trials to Court Trials - American Health Lawyers Association

The Deep Dive: Ethical and legal challenges of artificial intelligence in cardiology - AI Med

The future of artificial intelligence in medicine: Medical-legal considerations for health leaders - Canadian College of Healthcare Leaders

Ethical and Legal Challenges of Artificial Intelligence in Cardiology - AI Med

Intersection of artificial intelligence and medicine: tort liability in the technological age - Journal of Medical Artificial Intelligence

Legal Issues Raised by Medical AI: An Introductory Exploration - American Bar Association

Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging - Frontiers in Cardiovascular Medicine

Artificial Intelligence in Cardiovascular Imaging: “Unexplainable” Legal and Ethical Challenges? - Canadian Journal of Cardiology

Dave Fornell has covered healthcare for more than 17 years, with a focus on cardiology and radiology. Fornell is a 5-time winner of a Jesse H. Neal Award, one of the most prestigious editorial honors in specialized journalism. The wins included best technical content, best use of social media and best COVID-19 coverage. Fornell was also a three-time Neal finalist for best range of work by a single author. He produces more than 100 editorial videos each year, most of them interviews with key opinion leaders in medicine. He also writes technical articles, covers key trends, conducts video hospital site visits, and is very involved with social media. E-mail: dfornell@innovatehealthcare.com
