Coalition acts to ensure credible, fair, transparent AI in healthcare

Having identified an “urgent” need for guardrails to keep healthcare AI from veering into an avoidable ditch, the Coalition for Health AI has put together a 24-page guide applicable to numerous groups of stakeholders.

CHAI released its “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” April 4.

The document is intended to steer the technology along an open-ended journey in which a focus on “fairness, ethics and equity” keeps tech-enabled medical progress accessible, or makes it so, for all populations.

Or, as the authors of the paper put it, their mission is to help U.S. healthcare navigate “an ever-evolving landscape of health AI tools to ensure high-quality care, increase trustworthiness among the healthcare community, and meet the needs of patients and providers.”

‘Need for a common, agreed-upon set of principles’

The CHAI blueprint fleshes out attributes healthcare AI algorithms, software and systems should possess before being implemented in clinical settings.

Along with safety and efficacy, these include usefulness, reliability, testability and ease of use.

Further, the blueprint maintains, adopted AI iterations should be explainable and interpretable, fair (“with harmful bias managed”), secure and resilient, and privacy-enhanced.

In a section on next steps, the authors write:

“Each healthcare institution may use different kinds of AI tools. However, there is a need to use a common, agreed-upon set of principles to build them and facilitate their use. Through an assurance lab, health systems as well as tool developers and vendors can submit processes and tools for evaluation to ensure readiness to employ AI tools in a way that benefits patients, is equitable and promotes the ethical use of AI.”

The authors note that large medical centers may already have such measures in place.

Access to trustworthy health AI shouldn’t depend on patient location

In any case, institutions of any size may do well to form or refresh an advisory committee to “advance the field and ensure equity so that, for a given patient, access to trustworthy health AI would not depend on where they live or with which health system they are interacting.”

The authors state the document builds on and aligns with the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights” and the National Institute of Standards and Technology’s “AI Risk Management Framework.”

Stakeholder groups the CHAI blueprint considers within its target audience include data scientists, informaticists, software engineers, vendors, end users, patients, professional societies that publish clinical practice guidelines, hospital and health-system leadership, researchers and research funders, educators and medical trainees.

CHAI says the guide incorporates input from founding members of the coalition itself. These include representatives of academia, healthcare and industry who work with governmental observers from AHRQ, CMS, FDA, ONC, NIH and the White House Office of Science and Technology Policy.

Read the full blueprint here and a CHAI news item on it here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

