More than 40% of Americans are generally OK with the thought of AI reading their chest x-rays. Moreover, some 12.3% are very comfortable with the prospect.
However, only 6% are fully at ease with the notion of a cancer diagnosis coming from an algorithm, and just over a quarter are even “somewhat comfortable” with that scenario.
The findings come from a survey of 926 patients who were empaneled online to represent the U.S. adult population at large. The study was conducted at Weill Cornell Medicine and Yale Medicine over a two-week period in December 2019. Participants were selected using weighted criteria to reduce sampling bias and account for nonresponse.
JAMA Network Open has posted the study as a brief research letter.
Hospitalist Dhruv Khullar, MD, MPP, therapeutic radiologist Sanjay Aneja, MD, and colleagues found most patients expect AI to make healthcare somewhat better or much better (44.5% and 11%, respectively; 55.5% combined).
By contrast, relatively few believe the technology will render care somewhat worse or much worse (4.3% and 2%). Just under 20% said they don’t know which way things will go with AI in healthcare.
Meanwhile most respondents expressed some or much concern about AI’s unintended consequences, including misdiagnosis (91.5%), privacy breaches (70.8%), less time with clinicians (69.6%) and higher healthcare costs (68.4%).
“A higher proportion of respondents who self-identified as being members of racial and ethnic minority groups indicated being very concerned about these issues, compared with White respondents,” the authors note.
More findings from the study:
- 66% of panel members deemed it very important that they be informed when AI helped diagnose their condition or guide their treatment
- 40.5% felt somewhat uncomfortable, and 31% felt very uncomfortable, about getting a diagnosis from an AI system that had 90% accuracy but gave no indication of its rationale (the “black box” effect)
- The fifth of patients who don’t know whether AI will improve or worsen healthcare were much more likely than their decisive peers to consider it very important to know when AI played even a small role in their medical decision-making (59.7% vs. 42.3%)
The “don’t knows” also were more likely to report being very uncomfortable with receiving an AI diagnosis that was accurate 98% of the time but could not be explained (26.7% vs. 18.8%).
The authors acknowledge as a limitation their study design’s limited generalizability, owing to its reliance on an online survey panel.
“Clinicians, policy makers, and developers should be aware of patients’ views regarding AI,” the authors conclude. “Patients may benefit from education on how AI is being incorporated into care and the extent to which clinicians rely on AI to assist with decision-making. Future work should examine how views evolve as patients become more familiar with AI.”
Dhruv Khullar, MD, MPP; Lawrence Casalino, MD, PhD; Yuting Qian, MSc; Yuan Lu, ScD; Harlan Krumholz, MD, SM; Sanjay Aneja, MD; “Perspectives of Patients About Artificial Intelligence in Health Care.” JAMA Network Open, May 4. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2791851