Smarter AI: The Only Way to Mitigate AI Bias in Healthcare

Researchers believe that a ‘Smarter AI’ could help address AI bias in healthcare

With healthcare now regarded as the most critical industry, we need to understand the role of Artificial Intelligence (AI) in medicine. Applying technology in the healthcare sector opens up possibilities such as decision support, patient care and disease management. The major problem with technology in medicine, however, is the bias it can introduce. Since governments and medical institutions across the globe have long promoted equal treatment, a machine discriminating against people because of its dataset seems unacceptable. The discussion therefore turns to the question: what can we do to eradicate AI bias in healthcare? Researchers and scientists believe that smarter AI could be the answer.

Artificial intelligence and machine learning systems are performing advanced medical tasks: diagnosing skin cancer like a dermatologist, picking out a stroke on a CT scan like a radiologist and assisting patients in hospitals like a healthcare worker. Yet the public was not well aware of AI’s contributions to healthcare until the outbreak of the Covid-19 pandemic brought them to light. The core of AI’s functionality depends on the data fed into the system: AI builds complex models to automate diagnosis and helps doctors tailor personalised care programmes to individual patients. The AI-in-healthcare market is expected to reach US$36.15 billion by 2025, growing at a rate of 50.2%. Medical institutions are also enhancing their infrastructure to accommodate whatever technological trends emerge in the future.

AI algorithms are the foundation of technology in healthcare, and they often adopt unwanted biases inadvertently, leading to improper diagnoses and care recommendations. Since every algorithm begins with data, and collecting medical data is especially troublesome, the problem starts early: healthcare data raises privacy and security concerns, and data is sometimes sequestered for economic reasons. One study found that hospitals which shared data were more likely to lose patients to local competitors. Adding to the hindrance, the lack of interoperability between medical records systems and technical solutions remains a barrier. Bias in healthcare, however, predates AI: since the early days of clinical trials, women and minority groups have been underrepresented as study participants.

Real-world examples of AI bias in healthcare

A Canadian company developed an algorithm to identify neurological diseases. It recorded the way people speak and analysed the data to detect early-stage Alzheimer’s disease, with test results showing more than 90% accuracy. Unfortunately, the training data consisted of samples from native English speakers only. When a non-native English speaker took the test, the system would identify their pauses and mispronunciations as indicators of the disease.

UnitedHealth Group’s Optum division developed a product and sold it to hospitals without realising its bias. The system was designed to identify patients who required care management, and the algorithm used past spending data to predict severity of illness without considering the broader societal factors that produce racial inequities in the amount of care received. As a result, it falsely assigned Black patients the same level of risk as healthier white patients. Altering the algorithm to remedy the bias would increase the percentage of Black patients receiving additional help from 17.7% to 46.5%.
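The Optum case illustrates what is often called proxy-label bias: spending stands in for health need, but groups with less access to care spend less at the same level of illness. A minimal sketch of the mechanism, using entirely synthetic data and hypothetical numbers (the access multipliers and the top-20% threshold are assumptions, not figures from the actual product):

```python
import random

random.seed(0)

# Synthetic patients: both groups have the same illness distribution,
# but group B historically spends less at the same illness level
# (hypothetical access multipliers, chosen only to illustrate the bias).
def make_patient(group):
    illness = random.uniform(0, 10)            # true severity
    access = 1.0 if group == "A" else 0.6      # unequal access to care
    spending = illness * access * 1000         # spending reflects illness AND access
    return {"group": group, "illness": illness, "spending": spending}

patients = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

# The "algorithm": flag the top 20% of patients by spending for extra care.
cutoff = sorted(p["spending"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["spending"] >= cutoff]

# Ground truth: who is actually in the sickest 20%?
sick_cut = sorted(p["illness"] for p in patients)[int(0.8 * len(patients))]
for g in ("A", "B"):
    truly_sick = sum(1 for p in patients if p["group"] == g and p["illness"] >= sick_cut)
    got_flag = sum(1 for p in flagged if p["group"] == g)
    print(f"group {g}: truly in sickest 20% = {truly_sick}, flagged for care = {got_flag}")
```

Both groups contain roughly the same number of truly sick patients, but nearly all flagged patients come from group A, because the spending proxy conflates being sick with being able to access care.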

Smarter AI is the solution

A panel of global researchers convened to discuss the requirements AI algorithms must meet to guard against bias in healthcare. In a nutshell, they concluded that a ‘Smarter AI’ is the only way out. Some key highlights of the discussion are listed below.

  • Using data that comes from a social system with existing cultural and institutional biases will carry those inequalities into healthcare.
  • White patients’ complaints are often taken more seriously: they are treated or investigated until a cause is found, while in other groups symptoms are more readily dismissed. This reflects wider societal discrimination against non-white, low-income and less-educated people, and that discrimination must not be carried into the healthcare industry.
  • The researchers suggested conducting an exploratory error analysis, examining every error case to find common threads, rather than looking only at the AI model.
  • The humans feeding medical data to a machine should understand where bias can creep in. Models that can make their reasoning understandable to humans could help.
  • The way out of AI bias in healthcare could be a ‘Smarter AI’ that understands biases and addresses them. Taking in the patient experience, conducting exploratory error analysis and building smarter, more robust algorithms could reduce bias in many clinical settings.
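The exploratory error analysis the panel recommends can start very simply: slice a model's errors by subgroup instead of reporting one aggregate accuracy. A hedged sketch using synthetic evaluation records (the subgroup names and labels are invented for illustration; in practice they would come from a held-out clinical test set):

```python
from collections import defaultdict

# Synthetic evaluation records: (subgroup, true_label, predicted_label).
# Labels: 1 = disease present, 0 = absent.
records = [
    ("native_speaker", 1, 1), ("native_speaker", 0, 0),
    ("native_speaker", 1, 1), ("native_speaker", 0, 0),
    ("non_native", 1, 1), ("non_native", 0, 1),
    ("non_native", 0, 1), ("non_native", 0, 0),
]

def error_rates_by_group(records):
    """Return each subgroup's error rate: fraction of wrong predictions."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        wrong[group] += int(truth != pred)
    return {g: wrong[g] / total[g] for g in total}

rates = error_rates_by_group(records)
for group, rate in sorted(rates.items()):
    print(f"{group}: error rate = {rate:.0%}")
# Aggregate accuracy (75% here) hides that errors cluster in one subgroup.
```

In this toy data the model looks reasonable overall, yet every error is a false positive on non-native speakers, exactly the failure pattern of the Alzheimer's speech algorithm described above.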
