AI generates imitation lung X-rays replete with diagnosable pathologies
Stanford researchers have created synthetic yet highly realistic chest X-rays by customizing Stable Diffusion, an open-source AI model that renders text prompts as images.
The model is typically used by the general public for creating art, not supporting science. But in this case the “radiographs” are of high enough quality that the technique might come to substitute for real-world image datasets when those are too small to drive clinical research.
The study abstract is posted on the preprint server arXiv [1].
In the summary, graduate researcher Pierre Chambon, radiologist Christian Bluethgen and colleagues describe their work expanding the “representational capabilities of large pretrained foundation models to medical concepts, specifically for leveraging the Stable Diffusion model to generate domain specific images found in medical imaging.”
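In practice, prompting Stable Diffusion for an image takes only a few lines of code. The sketch below uses the open-source Hugging Face diffusers library; the base checkpoint and the radiology prompt are illustrative assumptions, not the study’s actual configuration.

```python
# Minimal sketch of text-to-image generation with Stable Diffusion via the
# Hugging Face `diffusers` library. This is NOT the Stanford team's pipeline;
# the checkpoint and prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed base checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A domain-specific prompt of the kind the study describes (hypothetical wording).
prompt = "chest X-ray showing a right-sided pleural effusion"
image = pipe(prompt).images[0]
image.save("synthetic_cxr.png")
```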
The authors report that their best-performing model generated lifelike lung “abnormalities” and that such findings were detected with 95% accuracy.
In a news item published Nov. 29 by Stanford’s Institute for Human-Centered Artificial Intelligence, Chambon notes the team set out to adapt the existing open-source foundation model with only minor tweaks.
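What “minor tweaks” typically means for a model like this is fine-tuning one component on captioned domain images while the rest stays frozen. The sketch below follows the generic diffusers recipe for updating only the denoising U-Net with the standard noise-prediction loss; it is one plausible reading of the approach, not the study’s published training setup.

```python
# Generic Stable Diffusion fine-tuning step (Hugging Face `diffusers` recipe).
# Only the denoising U-Net is trained; the VAE and text encoder stay frozen.
# This is an illustrative sketch, not the Stanford team's published setup.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"  # assumed base checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

vae.requires_grad_(False)           # frozen: image encoder/decoder
text_encoder.requires_grad_(False)  # frozen: text encoder
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def train_step(pixels, captions):
    """One update on a batch of images (B, 3, 512, 512, in [-1, 1]) and captions."""
    # Encode images into the latent space, scaled per the SD v1 convention.
    latents = vae.encode(pixels).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy = scheduler.add_noise(latents, noise, t)
    ids = tokenizer(captions, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    text_emb = text_encoder(ids).last_hidden_state
    # The U-Net learns to predict the added noise, conditioned on the caption.
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```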
Commenting on the experiment’s success, he says there’s “a lot of potential in this line of work.”
The Stanford news writer suggests the study represents a “promising breakthrough that could lead to more widespread research, a better understanding of rare diseases and possibly even development of new treatment protocols.”
The piece also quotes Bluethgen expressing amazement at the quality of the images the team produced.
“Typing a text prompt and getting back whatever you wrote down in the form of a high-quality image is an incredible invention—for any context,” Bluethgen says. “It was mind-blowing to see how well the lung X-ray images got reconstructed. They were realistic, not cartoonish.”