Should AI-based imaging tools guide treatment decisions?

Artificial intelligence in imaging offers many potential benefits, but it also carries varying levels of uncertainty. So who should decide the extent to which radiologists will use AI-based imaging tools to make clinical decisions?

The first step in answering that question, according to Abhinav Jha, PhD, of the Washington University in St. Louis (WUSTL) McKelvey School of Engineering, is to develop a framework to quantify the uncertainty of AI-based imaging methods. Now, with the help of a new $314,807 grant from the National Institute of Biomedical Imaging and Bioengineering, part of the National Institutes of Health, Jha will lead a research project to do just that.

Quantifying uncertainty of AI-based imaging methods

As an assistant professor of biomedical engineering, Jha clearly has an appreciation for what automation can offer. But he's also acutely aware of the ethical questions that new tools can raise.  

"There is strong interest in my group and others in developing AI-based methods for imaging. However, for clinical translation, we need to address ethical issues surrounding how to model uncertainty of AI algorithms," Jha told Radiology Business in an email interview. "In this project, we will be addressing this important need by developing a framework to quantify and incorporate this uncertainty when making clinical decisions."

AI, for example, can quickly provide doctors with quantitative measurements of tumor volume, but how dependable are those measurements, and how much weight should patients give them? By shedding light on the uncertainty associated with these measurements and other imaging findings, the framework would help both patients and doctors assess the tradeoffs of using those findings in clinical decision-making.
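Jha's framework itself has not yet been published, but one common way to expose this kind of uncertainty is to run an ensemble of models (or repeated stochastic forward passes) over the same scan and report the spread of the outputs rather than a single number. The sketch below is a minimal, hypothetical illustration in Python; the `ensemble_volumes` values are simulated stand-ins, not real model outputs.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical stand-in for ensemble outputs: tumor volumes (mL) predicted
# by, say, 30 segmentation models or Monte Carlo dropout passes.
ensemble_volumes = rng.normal(loc=42.0, scale=3.5, size=30)

mean_volume = ensemble_volumes.mean()
std_volume = ensemble_volumes.std(ddof=1)  # sample standard deviation

# Report the measurement with an uncertainty band instead of a bare number.
print(f"Tumor volume: {mean_volume:.1f} mL "
      f"(95% interval: +/- {1.96 * std_volume:.1f} mL, assuming normality)")
```

The point of reporting the interval alongside the mean is exactly the one Jha describes: the same 42 mL estimate reads very differently when its spread is 1 mL versus 10 mL.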

"The ability to quantify uncertainty provides the patients an added dimension to make more informed choices about their clinical decisions. If the output of an AI-based tool indicates aggressive therapy but with high uncertainty, some patients may be risk-averse and assign more weight to uncertainty, others may value the treatment benefits, while still others may be neutral," Jha says. 

Understanding patient attitudes

After coming up with a method to quantify uncertainty, Jha will shift gears to focus on the second part of his research: developing a patient questionnaire that assesses patient attitudes toward risks when an AI-based tool is part of the decision-making process, especially when given information on uncertainty levels associated with AI-related findings. 

The questionnaire will build on noteworthy results from a prior survey Jha worked on, which revealed that, when given the choice between AI making diagnoses alone, their doctor deciding alone, or something in between, most patients want AI incorporated at least to some degree.

"From a small survey that we [previously] conducted, it appeared that patients may want AI to assist physicians. [But] also, patients do want the AI’s uncertainty to be incorporated into a final decision. These findings did surprise many," Jha says. 

To develop the questionnaire, Jha and his team will also work with philosophy professor Anya Plutynski, PhD, who also worked on the first survey; radiation oncologist Clifford Robinson, MD; and nuclear medicine physician Tyler Fraum, MD. 

More informed choices 

Overall, the ability to quantify uncertainty, combined with a better understanding of patients' tolerance for such uncertainty, can offer a fuller picture of how to ethically incorporate AI-based imaging informatics into real-world treatment decisions, Jha suggests.

More from Jha:

"We anticipate that this work will give physicians and patients the ability to make more informed choices when AI-based tools are used as part of the clinical decision-making process. This would strengthen the confidence in the use of these tools, leading to more trustworthy AI."
