7 lessons learned during joint big business/healthcare AI projects

Big Tech players have been investing in partnerships with large healthcare providers on AI endeavors for several years now. According to both sides in one such collaboration, the resulting synergy offers “immense potential” to improve patient access, care and outcomes.

The collaborators making the claim hail from Microsoft Corp. and Johns Hopkins Medicine. JACR published their opinion piece discussing the matter March 30 [1].

“One of the first lessons we learned was that, if you want to impress people, your solution can be complex,” they write, “but if you want to have an impact on the world, your solutions need to be simple enough to be implemented.”

The paper is lead-authored by Juan Lavista Ferres, MSc, chief scientist and lab director of Microsoft’s AI for Good research lab.

Its corresponding author is Johns Hopkins radiologist Linda Chu, MD.

Noting Microsoft’s investments of $40 million in general global health and $20 million specifically for COVID-19 relief and research—both via AI for Good sub-initiative AI for Health—the authors list seven discrete lessons they’ve learned while doing the work.

1. For some world problems, relying on AI is the only option we have. Lavista Ferres and co-authors point to AI’s success in diagnosing diabetic retinopathy, an easily preventable condition, in regions lacking ophthalmologists. They cite research showing AI models achieve upwards of 97% accuracy in detecting diabetic retinopathy. However, they add,

“Despite AI’s promising potential, it comes with challenges and limitations of its own.”


2. AI expertise alone cannot solve these problems; we need to collaborate with subject matter experts.

“Machine learning excels at prediction and correlation but not at identifying causation. It doesn’t know the direction of the causality.”


3. Many organizations have subject-matter expertise but are unable to attract or hire the AI talent needed to solve these types of problems. Outside the technology and finance industries, most organizations lack the capacity or the infrastructure to hire AI talent, the authors point out. As a result,

“Many global problems are often left behind as AI continues to play a role in our society.”


4. We can be fooled by bias. The authors summarize a 1991 study showing left-handed people had lifespans averaging nine years shorter than those of right-handers. Later it came to light that many left-handers had “converted” to right-handedness due to external pressures, skewing the findings. “Researchers had assumed that the percentage of left-handed people is stable over time; the population, though random, is biased against left-handed people.”

“Most data we collect has biases, and if we do not understand them and take them into account, our data models will not be correct.”


5. We forget that correlation or predictive power does not imply causation; the fact that two variables are correlated does not imply that one causes the other. Lavista Ferres and co-authors cite a Gallup survey showing 64% of Americans believe correlation implies causation.

“We have to understand that most people don’t know the difference.”


6. Models are very good at cheating; if there is anything the model can use to cheat, it will learn it. Here the authors cite a study in which an AI model was trained to distinguish between skin cancer and benign lesions. The model seemed to achieve dermatologist-level performance. “However,” Lavista Ferres and colleagues write, “many of the positive cases had a ruler in the picture, while the negative cases did not.”

“The model learned that, if a ruler appeared in the image, there was a much higher chance of the patient having cancer.”


7. Access to data is one of the biggest challenges we face. Much medical data cannot be opened to researchers due to privacy issues, the authors point out. Meanwhile, there is always the risk that someone will attempt to de-anonymize supposedly “anonymized” data. The good news:

“Privacy-preserving synthetic images can … generate realistic synthetic data, including synthetic medical images, after training on a real dataset, without affecting the privacy of individuals.”

Lavista Ferres et al conclude that AI “provides us with numerous opportunities for advancement in the field of radiology: improved diagnostic certainty, suspicious case identification for early review, better patient prognosis and a quicker turnaround. Machine learning depends on radiologists and our expertise, and the convergence of radiologists and AI will bring forth the best outcomes for patients.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
