‘Insufficient governance of AI’ is the No. 2 patient safety threat in 2025
“Insufficient governance of artificial intelligence” is the No. 2 patient safety threat in 2025, according to the latest list from ECRI released March 10.
AI has been present in healthcare for years, incorporated into a growing list of applications including imaging, clinical decision support and medical note generation, the nonprofit notes. Such technology can produce myriad benefits but also comes with potential hazards to patients and providers.
The patient safety organization—formerly known as the Emergency Care Research Institute—compiled its latest annual top 10 based on a “wide scope of data.” It aimed to pinpoint the “most pressing threats to patient safety” via scientific literature, safety incident reports and other sources. “Dismissing patient, family and caregiver concerns” came in at No. 1 this year.
“AI models are only as good as the algorithms they use and the data on which they are trained,” ECRI said in its report. “When AI models are based on bad data, they can increase the chances of an adverse event. Medical errors generated by AI could compromise patient safety and lead to misdiagnoses and inappropriate treatment decisions, which can cause injury or death.”
Despite these dangers, only about 16% of hospital executives surveyed in 2023 said they have a systemwide governance policy for AI use and data access, an analysis found. Bias is one concern, ECRI notes: models trained on flawed data can exacerbate inequities related to race, gender or socioeconomic status.
“Failure to develop systemwide governance to evaluate, implement, oversee, and monitor new and current AI applications may increase healthcare organizations’ liability risks,” the report noted. “However, it can be challenging to establish policies that can adapt to rapidly changing AI technology.”
ECRI offers radiology departments and practices a list of recommended actions on AI governance, broken down into four categories:
1. Culture, leadership and governance: Establishing policies, forming a committee to evaluate new technology, ensuring the organization follows federal and local laws, training staff on the AI policy, and assessing outcomes regularly.
2. Patient and family engagement: Disclosing the use of AI to patients and obtaining their consent, soliciting feedback, and engaging patient and family advisory councils to help educate about the use of generative AI.
3. Workforce safety and wellness: Performing assessments of clinical workflows when new AI technologies are implemented, assessing the user experience regularly, and taking staff safety concerns seriously.
4. Learning system: Implementing a robust reporting system for AI-related incidents, emphasizing to staff that they should defer to their own clinical judgment when questioning AI-aided decisions, and educating team members on how to identify errors or adverse events. (A rough sketch of what such an incident record might capture follows this list.)
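ECRI's report does not prescribe how an AI incident reporting system should be built. As a purely hypothetical sketch (every class, field and function name below is invented for illustration), a structured record supporting this kind of learning system might look like the following in Python:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration only: ECRI does not prescribe a schema.
# Field names are invented to show what such a record might capture.

@dataclass
class AIIncidentReport:
    reporter_id: str                     # staff member filing the report
    ai_application: str                  # e.g., "chest CT nodule detection model"
    description: str                     # free-text account of the event
    ai_recommendation: str               # what the model suggested
    clinician_action: str                # what the clinician actually did
    patient_harm_occurred: bool = False  # whether the event reached the patient
    clinician_overrode_ai: bool = False  # clinical judgment took precedence
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def needs_governance_review(report: AIIncidentReport) -> bool:
    """Flag reports for the AI governance committee: any event involving
    patient harm, or any case where a clinician overrode the model
    (a potential signal of model error or drift)."""
    return report.patient_harm_occurred or report.clinician_overrode_ai

# Example: a radiologist dismisses a model's recommendation after review.
report = AIIncidentReport(
    reporter_id="rad-0042",
    ai_application="chest CT nodule detection model",
    description="Model flagged a benign calcification as suspicious.",
    ai_recommendation="Recommend biopsy",
    clinician_action="Dismissed finding after review of prior exams",
    clinician_overrode_ai=True,
)
assert needs_governance_review(report)
```

The override flag mirrors ECRI's advice that staff defer to their own clinical judgment; aggregating such records would give a governance committee one signal for monitoring model performance over time.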
ECRI’s list includes other safety concerns relevant to radiology, such as cybersecurity breaches at No. 4 and diagnostic errors at No. 7. The full report is available for free download from ECRI. “Challenges transitioning newly trained clinicians from education into practice” was the No. 1 patient safety threat in 2024.
Here’s the full list:
1. Dismissing patient, family and caregiver concerns.
2. Insufficient governance of artificial intelligence.
3. Spread of medical misinformation.
4. Cybersecurity breaches.
5. Caring for veterans in nonmilitary health settings.
6. Substandard and falsified drugs.
7. Diagnostic error in cancers, vascular events and infections.
8. Healthcare-associated infections in long-term care facilities.
9. Inadequate coordination during patient discharge.
10. Deteriorating working conditions in community pharmacies.