NightHawk Offers Model for QA
It’s easy to let quality assurance (QA) slip into a lip-service category, but that is something that a nighttime stat-reading teleradiology service can’t afford to do—particularly if it is an industry leader like NightHawk Radiology Services. Dionne Watts, quality-assurance supervisor, says “QA for teleradiology is important because the client facility doesn’t know our radiologists. They like to see how good we are. They don’t personally see us on a day-to-day basis, so there is a trust factor that takes a lot longer to build. QA is a part of that.”
Watts, an Australian who now works at NightHawk headquarters in Coeur d’Alene, Idaho, says that NightHawk put its QA program together eclectically, using Joint Commission requirements, ACR guidelines, and HIPAA regulations to frame its QA structure. NightHawk went further, though. The company researched QA at other big health care institutions and then added some ingenuity of its own to devise a program that met its special needs as a teleradiology provider. The result is a QA program that could be a model for other health care providers—and in fact, it often is. Watts calls it a “robust, reliable, and educational program.” NightHawk’s QA program isn’t simple. It can be quite complicated, but it’s built on a simple framework: image-interpretation errors or omissions are reported, reviewed, and studied to prevent recurrence.

Reports Only

One lesson that NightHawk learned was not to mix image-quality or patient-positioning issues with interpretation issues, Watts says. In the beginning, QA did handle image-quality issues—poor images, poor transmission, or artifacts—but that proved too great a distraction from QA’s work on the interpretation side. Now, all image-quality and transmission problems are handled by NightHawk’s customer-service and IT departments, working with the client hospital’s corresponding teams or with the technologist sending the images. Issues of turnaround time on reports are also handled by customer service, Watts explains. NightHawk’s QA is reserved for interpretation issues. “We deal primarily with report quality. The product we supply—that’s QA’s domain. Anything to get to that point, like image transmission, falls to customer service,” Watts says. Because QA is a complex domain, there are inevitably overlaps where QA, customer service, and IT may confer on an interpretation issue. Watts cites the example of NightHawk radiologists missing an increased number of appendicitis cases.
It turned out that the radiologists were having trouble visualizing the appendix due to a glitch in the workstation. “We had a QA conference and determined that the workstation needed an upgrade. That was a useful outcome from a collaborative conference,” Watts recalls.

Big Picture

To understand the NightHawk implementation of QA, it helps to know more about NightHawk itself. The company uses only US board-certified radiologists to provide readings for US health care institutions. The NightHawk radiologists, who read from Sydney, Australia, and Zurich, Switzerland, are interpreting during their daytime hours, so no one is reading while tired. When a client hospital sends a case to NightHawk for interpretation, it uses a requisition that includes the patient data, the number and type of images to be read, and pertinent patient history, Watts says. She says the last item is especially important, noting, “We’re not there and we can’t see the patient. It’s imperative to make the clinical history accurate for us.” NightHawk provides preliminary and final interpretations for its US hospital clients, most often stat readings for nighttime emergency-department patients. The preliminary reports that NightHawk provides must be overread by the client hospital’s own radiologists, and a final report must be issued. This is the key step on which NightHawk’s QA program hinges: without the client hospital’s overreadings, there would be no feedback on the NightHawk preliminary reports sufficient to instigate the QA process. When NightHawk began providing its service in the 1990s, it was controversial because foreign jurisdictions were suddenly getting involved in US health care. There was also the complaint that radiological interpretations were being commoditized. Despite those concerns, the company has prospered and grown, and its stock is now publicly traded.
It has also branched into daytime reading, producing final readings for clients with too much volume or no radiologist on staff. On its Web site, NightHawk reports that its 122 radiologists read for more than 1,350 hospitals in the United States. NightHawk also operates a fully redundant LAN/WAN infrastructure using the second-largest virtual private network in the United States. While NightHawk, like most companies, is guarded about its proprietary statistics, Watts does say, “We’re looking at about 8,000 preliminary or final reports per night. During the day, we read 500 to 600 finals.” Of the preliminary reports, she adds, about 70% are produced in Sydney, 25% in the United States, and 5% in Zurich.

Discrepancy Reports

Reported discrepancies are the heart and soul of NightHawk QA. The process begins with the client hospital filing a discrepancy form. According to Watts, “A discrepancy is a significant variance between the site findings and the NightHawk report pertaining to information and images supplied to NightHawk Radiology at the time of interpretation.” Discrepancies can be either acute or nonacute. Incidental findings—Watts uses the example of nonobstructing renal cysts or calcifications—may not always be included in the NightHawk preliminary report, and their absence is not considered a true discrepancy. Discrepancy reports dealing with acute clinical issues could include a missed hemorrhage, free air, or pathology accounting for symptoms of high urgency. Discrepancy reports on nonacute issues could include unreported findings of a mass needing further workup (though not related to the acute presentation) or missed pathology accounting for low-urgency symptoms, such as sinusitis. When a discrepancy report is filed, the details are entered into a QA ticketing system that NightHawk uses to track and collate its QA data. Included in the ticketing system are the final report generated by the hospital radiologist and other pertinent information.
Follow-up reports, such as a discharge summary that NightHawk has requested, may also be included. The discrepancy report is sent to the NightHawk radiologist who did the original interpretation, along with the images for review. Whether the discrepancy involves an acute or a nonacute issue, that NightHawk radiologist is required to reply to the client site concerning the alleged error.

Internal Peer Review and Discrepancy Rates

NightHawk recently commenced an internal peer review of its final readings. A final peer review (FPR) committee has been formed of several NightHawk radiologists, the QA manager, and NightHawk’s QA medical director. The FPR committee reviews 1% of all NightHawk final reports, randomly chosen. The images are restored and sent to the FPR committee radiologists, who send any noted discrepancies to the QA staff. When a discrepancy is noted, the original interpreting radiologist completes an addendum to correct the report. The facility is contacted to ensure that it is fully informed of the change in findings and that the patient is treated correctly, as advised by ACR guidelines, Watts says. The discrepancy rate is an important benchmark of a company’s competence and expertise, and at NightHawk it is watched carefully by QA. “For the industry, anything under 1% is considered positive,” Watts says. “We sit at about 0.3%.” Still, 0.3% can add up: if NightHawk is sending out 8,000 preliminaries nightly, then about 24 of them may end up with a discrepancy filing. Two dozen discrepancy issues per day are enough to keep Watts and her six colleagues busy, because the QA process only begins with the discrepancy report and its review by the NightHawk radiologist involved. That radiologist may review the case and simply acknowledge the client’s findings on final interpretation, but at times, Watts says, the NightHawk radiologist will question the discrepancy report.
In that case, follow-up discharge reports, pathology reports, and other information may be requested from the submitting client hospital. This added information helps NightHawk assess how well its QA is working, Watts explains. Gathering the data is up to the QA team. A NightHawk radiologist can also ask for a blind review, which is conducted in-house by NightHawk radiologists. One blind review is done by a diagnostic radiologist, and a second blind reading is assigned to a NightHawk radiologist with a subspecialty in that particular field. The assignments are made on a rotating basis and are completed anonymously to maintain objectivity. This information is sent to the original interpreting radiologist, who may then request a review of the discrepancy by the client facility’s radiologists, along with a request that the client remove the case from its QA records. “That is what happens approximately 10% of the time,” Watts says. If the site is in agreement, the discrepancy is not counted statistically and does not show up in the 0.3% reported discrepancy rate.

Privileging and Tracking Trends

All NightHawk radiologists must have privileges at all of the hospitals for which they read. The logistics of keeping them privileged constitute “a huge task,” Watts says, and NightHawk tries to avoid having clients ask for radiologists to be removed from a hospital’s list of interpreters. Nonetheless, such requests do occur, and NightHawk complies immediately. “As we take these requests seriously, the medical executive team reviews all relevant data related to the site and the radiologist. This is to review the reasons and determine any possible resolutions for these requests. Sometimes we discuss the matter with the client facility and have the radiologist reinstated,” Watts says. While the resolution of discrepancy reports is central to NightHawk’s QA process, the company has put other safeguards in place to make its program even more robust.
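The workload figures behind the discrepancy and peer-review numbers quoted earlier can be checked with simple arithmetic. The sketch below uses only the figures from the article (8,000 nightly reports, a 0.3% discrepancy rate, a 1% FPR sample, and up to 600 daytime finals); the variable names are illustrative, not part of any NightHawk system.

```python
# Back-of-the-envelope check of the volume figures quoted in the article.
# All numbers come from Watts's statements; nothing here is NightHawk code.

nightly_reports = 8_000    # "about 8,000 preliminary or final reports per night"
discrepancy_rate = 0.003   # "We sit at about 0.3%"
fpr_sample_rate = 0.01     # FPR committee reviews 1% of final reports
daytime_finals = 600       # "500 to 600 finals" per day (upper bound)

# Expected discrepancy filings per night at the quoted rate
expected_filings = nightly_reports * discrepancy_rate

# Final reports pulled into internal peer review per day, at the upper bound
fpr_cases_per_day = daytime_finals * fpr_sample_rate

print(f"expected discrepancy filings per night: {expected_filings:.0f}")  # about 24
print(f"FPR reviews per day (at 600 finals): {fpr_cases_per_day:.0f}")    # about 6
```

This matches the article's "two dozen discrepancy issues per day" observation and suggests the FPR sampling adds a modest, steady review load on top of it.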
Each month, the QA medical director reviews the submitted discrepancies and the NightHawk radiologists’ responses, which gives the medical director an excellent understanding of the overall trends within the QA program. The QA percentages are also analyzed monthly by the QA committee, which is composed of the medical executive team and includes the QA medical director, the CEO, and the QA manager. Though the QA program is conducted primarily for educational purposes, the QA data provide an excellent review of trends for individual radiologists and for NightHawk as a whole. In addition, the QA medical director selects a number of discrepancy cases for discussion in CME-accredited monthly QA conferences, where images, findings, and interesting articles are discussed and analyzed by NightHawk-affiliated radiologists. The conferences are held in Zurich and Sydney, and remote radiologists can listen in while viewing images onscreen. This enables the radiologists to earn one CME credit for each conference they attend. Watts says, “If they do 12 per year, that’s a large chunk” of the CME credits that they need. Watts defines sentinel events as “the expiration, or severe physical or psychological disabling, of a patient as a direct result of NightHawk services or product.” NightHawk QA has a Joint Commission-mandated responsibility to document the severity of sentinel events and to perform a root-cause analysis to determine whether processes need to be revised to prevent them. All sentinel events are logged and reviewed by the QA committee to determine their root causes and to devise action plans to avoid such occurrences in the future. NightHawk is meeting or exceeding Joint Commission standards and ACR guidelines for reporting and reviewing discrepancies.
Watts says, “NightHawk has created a system that provides an excellent medium for both our client facilities and our affiliated radiologists to improve the quality of our product and the overall service to our ultimate client—the patients.”