Could AI algorithms result in racial bias?
Artificial intelligence may be a hot tech topic, but it could also pose ethical risks to healthcare, including the risk of racial bias, Clinical Innovation + Technology reported this month.
AI has a recent history of racial bias, according to Clinical Innovation + Technology. One example cited in the report: when Framingham Heart Study data were used to assess the risk of cardiovascular events in non-white populations, the results were racially biased, with both overestimation and underestimation of risk.
“The use of machine learning in complicated care practices will require ongoing consideration, since the correct diagnosis in a particular case and what constitutes best practice can be controversial,” Danton Char, MD, who co-wrote a paper on the subject, said. “Prematurely incorporating a particular diagnosis or practice approach into an algorithm may imply a legitimacy that is unsubstantiated by data.”
Char said he and his colleagues worry that racial bias could creep into healthcare algorithms when inclusive data are unavailable. If an algorithm designed to predict patient outcomes is built with insufficient data about a particular population, for example, it could end up racially biased.
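The article does not go into technical detail, but the concern Char describes can be illustrated with a small, hypothetical sketch: a model trained on data in which one group is heavily underrepresented will often perform worse for that group. Everything below is synthetic and assumed for illustration; none of it comes from the Framingham study, the report, or Char's paper.

```python
# Illustrative sketch only: shows how underrepresentation of one group in
# training data can degrade a model's performance for that group.
# All data here is synthetic; group names, sizes, and effect sizes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, risk_slope):
    """Generate synthetic patients: one feature plus a binary outcome whose
    relationship to that feature differs by group (an assumed difference)."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-(risk_slope * x[:, 0])))
    y = rng.binomial(1, p)
    return x, y

# Group A dominates the training data; group B is barely represented.
xa_train, ya_train = make_group(5000, risk_slope=1.0)
xb_train, yb_train = make_group(100, risk_slope=-1.0)

X_train = np.vstack([xa_train, xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate separately on held-out data for each group.
xa_test, ya_test = make_group(2000, risk_slope=1.0)
xb_test, yb_test = make_group(2000, risk_slope=-1.0)

print("accuracy, group A:", model.score(xa_test, ya_test))
print("accuracy, group B:", model.score(xb_test, yb_test))
# Typical result: accuracy is high for the well-represented group A and near
# or below chance for group B, whose outcome pattern the model never learned.
```

In this toy setup the gap appears only because one group's data is missing from training, which is the mechanism Char's team warns about; it is not a model of any real clinical algorithm.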
Char’s team is currently pushing for updated ethical guidelines for machine learning and artificial intelligence practices.