Most referrer requests for imaging are inadequate, new scoring system shows
Most referrer requests for medical imaging are inadequate due to insufficient information or reasoning, according to a single-center study published Friday.
Amid calls to reduce the rate of low-value imaging exams, experts have developed a scoring system for radiology requests. Dubbed the Reason for Exam Imaging Reporting and Data System or “RI-RADS,” the tool aims to standardize the clinical information included in radiology request forms.
Italian imaging experts sought to study RI-RADS in action, retrospectively applying the system to 762 requests at their institution. They discovered that most were inadequate based on RI-RADS, particularly those for routine exams, according to a study published in Insights into Imaging [1].
“This could be because the purpose of these examinations seems obvious, leading to less effort in crafting a detailed request,” Marco Parillo, of the radiology department of the public healthcare system in Trento, Italy, and colleagues wrote Nov. 8. “However, even for these routine procedures, clear physician input is essential. Including a well-defined clinical question, relevant clinical information, and the physician’s impression of the imaging request can reduce the risk of diagnostic errors.”
The analysis incorporated consecutive inpatient requests for CT, MRI and radiography during a two-month period in 2023. All referrals originated from a 357-bed secondary care university hospital affiliated with the Italian National Health Service. A single radiologist assessed all imaging requests and assigned a score using RI-RADS. Only the in-hospital requisition forms—which were digital and unstructured—were included in the study. The radiologist did not have access to other sources of information such as the electronic health record. The final RI-RADS grade was based on three elements: the physician’s impression, relevant clinical information and the diagnostic question.
Of the 762 requests, only about 1% earned a RI-RADS “A” grade, indicating adequacy. Meanwhile, about 7% scored a B (barely adequate), 31% a C (considerably limited request), 53% a D (deficient), and 8% an X (most deficient). Indication for imaging, body region and requesting specialty significantly influenced RI-RADS scores.
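The study does not spell out the exact grading criteria, but one way to picture how the three elements could map onto the five letter grades is the short sketch below. The mapping itself is an illustrative assumption for this article, not the published RI-RADS definitions, and the `ImagingRequest` fields are hypothetical names.

```python
# Illustrative sketch only: the exact criteria live in the original RI-RADS
# proposal. This mapping is an assumption built from the three elements the
# study says drive the grade (diagnostic question, clinical information,
# physician's impression).

from dataclasses import dataclass

@dataclass
class ImagingRequest:
    has_question: bool       # a well-defined clinical question
    has_clinical_info: bool  # relevant clinical information
    has_impression: bool     # the physician's impression/working diagnosis

def ri_rads_grade(req: ImagingRequest) -> str:
    """Assign a hypothetical RI-RADS letter grade from element presence."""
    n = sum([req.has_question, req.has_clinical_info, req.has_impression])
    if req.has_question and n == 3:
        return "A"  # adequate: all three elements present
    if req.has_question and n == 2:
        return "B"  # barely adequate: question plus one other element
    if req.has_question:
        return "C"  # considerably limited: question only
    if n > 0:
        return "D"  # deficient: some information but no clear question
    return "X"      # most deficient: essentially no usable information

# e.g., a routine post-procedure check with no question or impression
print(ri_rads_grade(ImagingRequest(False, True, False)))  # -> "D"
```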
For example, requests for routine preoperative imaging and device checks (e.g., radiography to check pleural drains or catheters) carried a high risk of earning a poor RI-RADS score. The upper extremities represented the body region with the greatest risk of resulting in a shoddy request. Cardiovascular surgeons, intensive care specialists and orthopedists were the specialties most likely to earn a failing RI-RADS grade. This, too, was likely because many requests from these three specialties were for routine reasons after a procedure.
Parillo et al. also analyzed the reliability of RI-RADS with four observers, finding “substantial” reproducibility.
“As healthcare providers become more familiar with RI-RADS criteria, interrater agreement is likely to improve,” the authors noted. “A key factor potentially driving RI-RADS adoption could come from artificial intelligence. By leveraging the impressive text analysis capabilities of large language models, physicians could potentially input their radiology request in the hospital’s electronic ordering system and receive the RI-RADS grade in real time, enabling them to adjust the completeness of the request accordingly.”
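As a rough illustration of the real-time grading idea raised in that quote, a check wired into an ordering system might look something like the sketch below. The prompt wording, the grade_request function and the call_llm stub are all hypothetical placeholders, not part of the study or of any existing ordering system or vendor API.

```python
# Hypothetical sketch of the real-time grading idea described above.
# call_llm() is a stand-in for whatever large-language-model API a hospital
# actually uses; no specific model or ordering-system hook is implied.

GRADING_PROMPT = """You are grading a radiology request using RI-RADS.
Check for three elements: a clear diagnostic question, relevant clinical
information, and the referring physician's impression.
Reply with a single letter: A, B, C, D, or X.

Request text:
{request_text}
"""

def call_llm(prompt: str) -> str:
    # Stand-in: replace with a call to the hospital's approved model.
    raise NotImplementedError("wire up an actual LLM client here")

def grade_request(request_text: str) -> str:
    """Return an RI-RADS letter grade for a free-text request via an LLM."""
    reply = call_llm(GRADING_PROMPT.format(request_text=request_text))
    grade = reply.strip().upper()[:1]
    # Fall back to the worst grade if the model's reply is unusable.
    return grade if grade in {"A", "B", "C", "D", "X"} else "X"

# In an ordering workflow, the physician would see grade_request(text)
# before submitting and could revise a request that scores D or X.
```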
Read more about the findings, including potential study limitations, at the link below.