‘Life or death consequences’: AMA pushes for greater transparency in imaging AI
The American Medical Association passed a resolution at its recent annual meeting, urging greater transparency in the use of artificial intelligence in radiology and other specialties.
AMA wants to “maximize trust” among docs and the public around how these models reach their conclusions. It’s advocating for “explainable AI tools” that incorporate safety and efficiency data, with detailed explanations behind their output.
The country’s largest physician lobbying group also wants more oversight and regulation of augmented intelligence and machine learning algorithms used in clinical settings, according to a June 11 announcement. AMA wants a third party—such as a medical association like itself or federal regulators—to determine if AI algorithms are explainable, rather than relying on a vendor’s potentially biased claims.
“With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients when making shared decisions about their healthcare,” radiologist and AMA Board Member Alexander Ding, MD, MS, MBA, said in a statement. “The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences. The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible and impactful tools used in patient care.”
When clinical AI is not explainable, the radiologist or other physician’s training and expertise are “removed from decision-making,” according to the AMA Council on Science and Public Health report that served as the basis for this policy. This may place rads in a predicament where they must act on information without any way to assess its accuracy. Intellectual property concerns should not serve as a rationale for refusing to explain AI output, the AMA believes.
“To this end, the new policy states that while intellectual property should be afforded a certain level of protection, concerns of infringement should not outweigh the need for explainability for AI with medical applications,” the association emphasized.
Politico also reported news of the resolution on Thursday.