Radiological AI may never dream of interpreting images—but don’t underestimate its virtual cognitive capacity

In 1968, the same year three humans first orbited the moon, sci-fi writer Philip K. Dick published Do Androids Dream of Electric Sheep? The novel imagined a dark, futuristic society in which real people couldn’t readily be distinguished from lifelike androids.

Given artificial intelligence of that caliber, Dick wondered, might androids have not only conscious minds but also subconscious cognition?

Six months after ChatGPT burst onto the scene, an academic radiologist is sufficiently taken with the novel (which inspired the Blade Runner movies of 1982 and 2017) to ask a related question specific to the medical specialty:

“Does ChatGPT-4 dream of counting electric nodules?”

Serious discussions going in surprising directions

Unsurprisingly, the radiologist, Christian Blüthgen, MD, of University Hospital Zurich in Switzerland and Stanford University’s Center for Artificial Intelligence in Medicine and Imaging in California, answers in the negative. However, there’s more on his mind than that:

“It is an easily committed fallacy to anthropomorphize large-language [AI] models like ChatGPT and assume an ‘understanding,’ but it is nonetheless remarkable that discussions in these directions are currently unfolding, even among seasoned AI researchers.”

Central to Blüthgen’s point, fleshed out in an opinion piece published April 26 in European Radiology [1], is that radiologists work every day with multimodal “inputs,” meaning words as well as images.

Meanwhile, the latest version of ChatGPT is increasingly able not only to handle pictorial as well as verbal inputs but also to explain them in the context of other multimodal inputs, the researcher notes.

“While GPT-4’s multimodal capabilities are currently restricted to a small group of researchers, other powerful vision-language models are already available for radiology today,” he points out. “Fine-tuned text-to-image models are able to synthesize chest X-rays, whose appearance can be controlled through text prompts.”  
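
For a rough idea of how such prompt-controlled synthesis looks in code, the sketch below uses the open-source diffusers library; the checkpoint name is a placeholder assumption, not a model Blüthgen cites:

```python
# Minimal sketch of prompt-controlled chest X-ray synthesis.
# "example-org/cxr-diffusion" is a hypothetical fine-tuned
# checkpoint used for illustration, not a real model release.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "example-org/cxr-diffusion",  # placeholder model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The text prompt steers the appearance of the synthesized radiograph.
prompt = "frontal chest X-ray, right lower lobe consolidation"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("synthetic_cxr.png")
```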

Further, current capabilities already include some helpful workload relievers. Blüthgen suggests large-language AI technology can:

  • Help handle unstructured data, such as summarizing research papers.
  • Aid in de-identification tasks.
  • Help non-native English speakers share ideas in proper, concise English.
  • Assist with formatting tasks, as when adapting text to a journal’s preferred style.
  • Offer coding support for “academic radiologists looking into programming, e.g., for generating code snippets for scientific plotting, providing debugging support and much more.” (A brief sketch of such a snippet follows this list.)
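
To make that last item concrete, here is the kind of scientific-plotting snippet such a model might generate on request; a minimal sketch whose data and labels are invented purely for illustration:

```python
# Example of a scientific-plotting snippet a large-language model
# might draft on request. All data here are invented for illustration.
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical reader scores across five test cases.
cases = np.arange(1, 6)
reader_a = np.array([0.82, 0.78, 0.90, 0.85, 0.88])
reader_b = np.array([0.79, 0.81, 0.86, 0.83, 0.91])

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(cases, reader_a, marker="o", label="Reader A")
ax.plot(cases, reader_b, marker="s", label="Reader B")
ax.set_xlabel("Test case")
ax.set_ylabel("Score")
ax.set_title("Hypothetical reader scores")
ax.legend()
fig.tight_layout()
fig.savefig("reader_scores.png", dpi=150)
```

As with any model-generated code, the radiologist still verifies that the output runs and plots what was intended.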

‘We are living in the future’

While allowing that large-language AI is presently incapable of contributing to clinical radiology absent close human supervision—dicey alignment of model outputs with human values is just one major hurdle—Blüthgen remains sanguine about its future in the field.

“Large-language models are beginning to display (sometimes unexpected) emerging abilities and hold tremendous potential for radiology,” he writes.

More:

“Whether or not this will entail dreaming about counting electric nodules, for us radiologists, it does not matter as much as the fact that it will most likely, in the not-too-distant future, be possible to instruct an AI system to perform this task. We are living in the future, and the future of AI in radiology is multimodal.”

Read the rest.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
