Thursday, January 31, 2019

A.I. Could Worsen Health Disparities

In a health system riddled with inequity, we risk making dangerous biases automated and invisible.


By Dhruv Khullar
Dr. Khullar is an assistant professor of health care policy and research.

Artificial intelligence is beginning to meet (and sometimes exceed) assessments by doctors in various clinical situations. A.I. can now diagnose skin cancer like dermatologists, seizures like neurologists, and diabetic retinopathy like ophthalmologists. Algorithms are being developed to predict which patients will get diarrhea or end up in the ICU, and the FDA recently approved the first machine learning algorithm to measure how much blood flows through the heart — a tedious, time-consuming calculation traditionally done by cardiologists.

It’s enough to make doctors like myself wonder why we spent a decade in medical training learning the art of diagnosis and treatment.

There are many questions about whether A.I. actually works in medicine, and where it works: Can it pick up pneumonia, detect cancer, predict death? But those questions focus on the technical, not the ethical. And in a health system riddled with inequity, we have to ask: Could the use of A.I. in medicine worsen health disparities?

There are at least three reasons to believe it might.

The first is a training problem. A.I. must learn to diagnose disease on large data sets, and if that data doesn’t include enough patients from a particular background, it won’t be as reliable for them. Evidence from other fields suggests this isn’t just a theoretical concern. A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?
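To make the training problem concrete, here is a minimal sketch in Python, assuming scikit-learn and purely synthetic data; the groups, features, and numbers are all hypothetical and not drawn from the article or any real study. It simulates a model trained on a cohort that under-represents one group, then audits its accuracy for each group separately:

# Illustrative only: synthetic data showing how a model trained on a
# cohort that under-represents one group can be less reliable for it.
# Assumes scikit-learn; all group names, features and weights are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Simulate n 'patients' whose label depends on group-specific weights."""
    X = rng.normal(size=(n, 5))
    y = (X @ weights + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# The disease 'presents' through partly different features in each group,
# a stand-in for, e.g., how melanoma looks on light versus dark skin.
w_a = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
w_b = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

# Training cohort: 95 percent group A, 5 percent group B.
Xa, ya = make_group(9500, w_a)
Xb, yb = make_group(500, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The audit: score each group separately on fresh data of equal size.
for name, w in [("group A", w_a), ("group B", w_b)]:
    Xt, yt = make_group(5000, w)
    print(f"{name}: accuracy = {model.score(Xt, yt):.1%}")
# Group B's accuracy lags because the model mostly learned group A's signal.

The aggregate accuracy of such a model can look reassuring; the gap appears only when performance is broken out by group, which is one reason disaggregated evaluation of clinical algorithms matters.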
(continues)
...humans, not machines, are still responsible for caring for patients. It is our duty to ensure that we're using A.I. as another tool at our disposal — not the other way around.
==
Health news

2 comments:

  1. This technology goes beyond the medical field; algorithms can become predictive in the most mundane situations. Have you ever wondered how you get particular advertisements in the mail? You know yourself best, so when a company tracks your searches, the products you buy, and the surveys you answer (all with your consent; that's what those "cookies" are for), a company like, let's say, Target can send you advertisements for things you might really need. Mistakes happen, so when a teenager was sent coupons for diapers, her parents were outraged. However, with so many inputs, sometimes algorithms can be correct. You can read more about it in this article: http://techland.time.com/2012/02/17/how-target-knew-a-high-school-girl-was-pregnant-before-her-parents/
    There are just as many issues with human studies as well. The Human Genome Project, which collected and compared the sequence of monomers that make up the instructions for our life, was originally done with the blood of only five people, each diverse according to reports. (You can read more about that here: http://www.genomenewsnetwork.org/articles/02_01/Whose_genome.shtml ) That is a very small sample given the supposed variation within our species, which is why we still study the genome today: there are gaps. Technology is a tool; it can only be used for what it was built to do. Our phones can take pictures, search for information, and come to conclusions based only on what they have access to. A doctor, on the other hand, can draw on experience and use a phone to augment their own knowledge and assist their judgment.

  2. DQ:

    If all diagnostic AI systems run off the same software, will they have the capacity to form different "opinions" or "ideas" about a diagnosis, or will they all have the same thoughts?

    Furthermore, if they all provide exactly the same diagnosis, is that consistency worth the risk that a shared misdiagnosis goes unidentified everywhere at once?
