NLP to aid point-of-care imaging informatics?

“The patient refused an autopsy.” “Discharge status: alive but without permission.” “Patient has two teenage children but no other abnormalities.” Classic comedic lines from Monty Python’s Flying Circus? Nope. Real-world examples of muffed medical dictation that could have been caught upon utterance by natural language processing (NLP) technology.

Kyle Silvestro, founder and CEO of SyTrue, offered the lines as examples of free text run amok. SyTrue is a Chico, Calif.-based healthcare data-refinement company that supplies NLP solutions to vRad, the 350-plus-radiologist practice headquartered in Eden Prairie, Minn.

Along with Shannon Werb, vRad’s CIO, Silvestro spoke at a March 31 webinar on NLP presented by vRad.

Silvestro said physicians create around 2 billion clinical notes and reports each year. That works out to more than 60 new notes every second, or roughly 5.5 million a day.

“The average human going through these documents manually, seeing anywhere between 40 and 100 documents per day—we just don’t have the labor force to deal with this challenge,” he said, suggesting that countless opportunities to capture rich, predictive health data are falling by the wayside.

“It’s not enough just to be able to go through information; you also have to be able to extract and normalize and validate that information,” Silvestro said. “You have to be able to make it usable for your end-users. Think of a Google on steroids, where you have the ability to ask natural-language questions of a platform or a technology.”

Read, process, understand, extract

Silvestro defined NLP as a computer’s ability to read, process, understand and extract clinical information from free-text documents, text files and dictated notes, such that the information can be coded in interoperable terminologies such as ICD-10, LOINC or RxNorm “in order to create value from processes downstream.”
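As a loose illustration of that read-extract-normalize loop (a sketch, not SyTrue’s implementation), the snippet below scans a free-text note for a handful of terms and maps them to ICD-10 codes. The term list, the toy negation handling and the sample note are all assumptions made for the example.

```python
import re

# Illustrative term-to-code map. Real systems normalize against full
# terminologies such as ICD-10, LOINC or RxNorm, not three entries.
ICD10_MAP = {
    "diabetes": "E11.9",      # type 2 diabetes mellitus without complications
    "hypertension": "I10",    # essential (primary) hypertension
    "chest pain": "R07.9",    # chest pain, unspecified
}

# Toy negation cue; production NLP handles context far more robustly.
NEGATION_TMPL = r"\b(?:no|denies|without)\b[^.]*?\b{term}\b"

def extract_codes(note: str) -> list[dict]:
    """Read a free-text note, find known terms, normalize them to codes."""
    text = note.lower()
    found = []
    for term, code in ICD10_MAP.items():
        if term in text:
            negated = bool(re.search(NEGATION_TMPL.format(term=term), text))
            found.append({"term": term, "icd10": code, "negated": negated})
    return found

note = "History of hypertension and diabetes. Patient denies chest pain."
print(extract_codes(note))
# diabetes and hypertension are coded; chest pain is coded but flagged negated
```

Flagging negation matters because, as the dictation gaffes above suggest, a phrase such as “denies chest pain” must not be indexed as a positive finding.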

Picture an informaticist using voice-recognition software to ask the platform to find all male patients between 35 and 40 with diabetes or hypertension who are on narcotics and do not have chest pain. Getting that information back concisely requires the capability to “harness that kind of collective intelligence” from millions of doctor-patient encounters, Silvestro said.
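Assuming the NLP layer has already reduced each encounter to a structured record, that spoken question might collapse to a filter like the hypothetical sketch below, where every field name, drug list and patient record is invented for illustration.

```python
# Hypothetical structured records emitted by an NLP pipeline.
patients = [
    {"id": 1, "sex": "M", "age": 37, "conditions": {"diabetes"},
     "meds": {"oxycodone"}, "findings": set()},
    {"id": 2, "sex": "M", "age": 38, "conditions": {"hypertension"},
     "meds": {"lisinopril"}, "findings": set()},
    {"id": 3, "sex": "M", "age": 36, "conditions": {"diabetes"},
     "meds": {"morphine"}, "findings": {"chest pain"}},
]

NARCOTICS = {"oxycodone", "morphine", "hydrocodone"}  # toy RxNorm-style class

# Male, 35-40, diabetic or hypertensive, on a narcotic, no chest pain.
cohort = [
    p for p in patients
    if p["sex"] == "M"
    and 35 <= p["age"] <= 40
    and p["conditions"] & {"diabetes", "hypertension"}
    and p["meds"] & NARCOTICS
    and "chest pain" not in p["findings"]
]

print([p["id"] for p in cohort])  # -> [1]
```

The hard part, of course, is not the filter but everything upstream of it: turning millions of free-text encounters into records clean enough to filter.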

“All of this can come from data that’s being created every day but that needs to be actively identified, extracted, normalized and then driven for downstream use,” he said. “Once you have that information, really what you can move into is collecting intelligence. You will be able to create different indexes or differentiators for you and your organization.”

NLP can advance the management of this sort of strategic data as U.S. healthcare shifts to a value-based care model, helping radiologists negotiate contracts with hospital groups and reimbursement rates with payers, Silvestro said.

The data “can be used to show the efficacy of the services you are providing,” he added. “You as a radiologist might be able to start quarterbacking the care of an individual and drive that care through the information that is being created, using these technologies to start impacting decisions and downstream outcomes.”

Information, naturally

For his part, Werb expressed his excitement over where NLP may allow vRad to go as the practice works with SyTrue to “take this information and start moving it away from retrospective claims analysis”—NLP’s current use among most provider organizations—“to real-time clinical analytics at the point of documentation.”

Werb noted that vRad handles many millions of radiology exams each year and has amassed a database of more than 30 million results. Its clinical repository is growing by 15,000 to 20,000 entries per day.

“We don’t yet have the ability for the radiologist to ask unstructured questions of the data set that we have, unless it’s very specific metadata that we happen to be storing in our RIS or our database,” he said. “What if we could allow, by a natural-language query, the radiologist to ask questions of reports we’ve already interpreted?”
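One way to picture what such a query layer does under the hood, sketched here with invented reports and a deliberately naive word-intersection lookup rather than anyone’s actual engine:

```python
from collections import defaultdict

# Toy report archive standing in for a repository of interpreted reports.
reports = {
    101: "CT chest: pulmonary embolism in the right lower lobe.",
    102: "CT head: no acute intracranial hemorrhage.",
    103: "Chest radiograph: findings consistent with pulmonary embolism.",
}

# Minimal inverted index: word -> set of report IDs containing it.
index = defaultdict(set)
for rid, text in reports.items():
    for word in text.lower().replace(":", " ").replace(".", " ").split():
        index[word].add(rid)

def ask(question: str) -> set[int]:
    """Naive natural-language lookup: reports containing every content word."""
    stop = {"which", "reports", "mention", "show", "a", "an", "the", "of"}
    words = [w for w in question.lower().rstrip("?").split() if w not in stop]
    hits = [index.get(w, set()) for w in words]
    return set.intersection(*hits) if hits else set()

print(ask("Which reports mention pulmonary embolism?"))  # -> {101, 103}
```

A real implementation would layer negation detection, synonymy and coded terminologies on top of this, so that report 102’s “no acute intracranial hemorrhage” never surfaces as a positive hemorrhage hit.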

So armed, he added, the radiologist would have access to a trove of information at the point of exam interpretation.

Looking further ahead, Werb said vRad may look to use NLP to help its rads understand what prior information is available and relevant to a current study.

“If you think about radiology interpretation today, we work really hard with very static rule sets to build algorithms to go after relevant prior reports and relevant prior images,” he explained. “But that’s typically based on things like body part or procedure type or imaging modality or maybe a combination thereof.”
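A minimal sketch of the kind of static rule set Werb describes, with invented matching rules: a prior counts as relevant when the patient and body part match and the modality pair is on an allow list.

```python
from dataclasses import dataclass

@dataclass
class Exam:
    patient_id: int
    body_part: str
    modality: str

# Invented allow list of modality pairs deemed "relevant" to a current exam.
RELEVANT_MODALITIES = {
    "CT": {"CT", "MR", "XR"},
    "MR": {"MR", "CT"},
    "XR": {"XR", "CT"},
}

def relevant_priors(current: Exam, priors: list[Exam]) -> list[Exam]:
    """Static rules: same patient, same body part, allowed modality pair."""
    allowed = RELEVANT_MODALITIES.get(current.modality, {current.modality})
    return [
        p for p in priors
        if p.patient_id == current.patient_id
        and p.body_part == current.body_part
        and p.modality in allowed
    ]

current = Exam(7, "chest", "CT")
priors = [Exam(7, "chest", "XR"), Exam(7, "head", "CT"), Exam(8, "chest", "CT")]
print(relevant_priors(current, priors))
# -> [Exam(patient_id=7, body_part='chest', modality='XR')]
```

Rules like these are brittle by construction, which is exactly Werb’s point: they cannot weigh the clinical content of a prior report, only its metadata.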

Werb posited a scenario in which the reading radiologist automatically receives a decision-support prompt combining all of that intel plus, for example, the reason or reasons the patient presented to the physician in the first place.

“We are not doing that today. I’m not aware of anyone else who is doing that today,” said Werb. “But we think that, by moving NLP to the point of care, we can gain real-time value from it as we are interpreting exams.”

vRad has posted the NLP webinar in its entirety for on-demand access (registration required).  

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
