
Video: The dangers of bias in artificial intelligence and how it adversely affects communities

At our latest HTN Now event, we welcomed David Newey, Deputy Chief Information Officer at The Royal Marsden NHS Foundation Trust, to talk about bias in Artificial Intelligence (AI).

Considering the recent advances in studying and utilising AI within the health and care sector, David discussed a hugely topical issue, covering areas such as bias and inequality in healthcare, examples of AI use, types of bias within AI and in model selection, as well as how to eliminate bias, how to design with equality in mind, and how to be an ally in the digital world.

Explaining some of The Royal Marsden’s work as a biomedical research centre, David began the session by outlining developments in AI. He noted, “We are doing a considerable amount of work within AI at the moment, particularly in regards to the use of artificial intelligence in assisting with cancer diagnosis. Most recently, we developed a natural language processing algorithm to look back through historical data, to aid in research conclusions, and by parsing through narrative text rather than structured text.”

“We’re just about to kick-off the implementation of an Epic EPR (Electronic Patient Record), which will go a step further in terms of increasing the amount of structured data we capture along the patient pathway,” he added.

“I wanted to give a talk about how we need to be conscious, particularly in developing artificial intelligence, so that we don’t hard-bake in any attitudes or any unconscious bias within the development of algorithms – so that it doesn’t, in the longer term, preclude any particular community from receiving equality of healthcare,” David explained.

He went on to elaborate: “When we look at biases in healthcare, and we look at particular characteristics that impact the provision of healthcare and the equality and equity of healthcare delivery…there is considerable evidence out there, based on research, that these particular factors all play an important part of life expectancy and the amount of healthcare that’s received.”

The characteristics David highlighted included: sexual identity, geographic location, socioeconomic status, obesity, age, ableism, racial bias, education, and gender.

For one of his examples of inequality in healthcare, David cited statistics from the British Medical Association showing that Black and Asian women in the UK are, respectively, five and two times more likely to die during childbirth than white women.

Types of bias in AI, David explained, include: implicit bias (unconscious prejudice carried into the development of algorithms); sampling bias (where data is skewed towards specific sections of the community, so an algorithm may never have been tested against the use cases of other groups); temporal bias (where a training model becomes obsolete because future events were not factored in); over-fitting to training data (where a model predicts the training dataset well but cannot predict new data accurately); and edge cases and outliers (data points outside the data’s normal distribution, or errors and irrelevant data that can negatively affect the learning process).
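To make these categories concrete, the short sketch below is a generic illustration rather than anything from David’s talk: using scikit-learn, with hypothetical file and column names and features assumed to be already numerically encoded, it compares training, test and per-subgroup accuracy, the kind of check that can surface over-fitting and sampling bias.

```python
# Minimal sketch: surfacing sampling bias and over-fitting during evaluation.
# The dataset, file name and column names ("sex", "outcome", ...) are hypothetical,
# and features are assumed to be already numeric/encoded.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cohort.csv")                       # hypothetical training cohort
X = df.drop(columns=["outcome"])
y = df["outcome"]

# Hold out an independent test set so test data never influences training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large gap between training and test accuracy is a sign of over-fitting.
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))

# Per-subgroup accuracy helps flag sampling bias: groups under-represented in
# the training data often show markedly lower accuracy on the test set.
for group, idx in X_test.groupby("sex").groups.items():
    acc = accuracy_score(y_test.loc[idx], model.predict(X_test.loc[idx]))
    print(f"accuracy for sex={group}: {acc:.3f} (n={len(idx)})")
```

A marked drop in accuracy for one group, or a wide gap between training and test scores, is exactly the sort of signal that should prompt the questions about data and design that David went on to discuss.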

David then ran through some real-world examples of AI bias leading to inequality – in areas stretching from employment to healthcare.

In terms of healthcare, David raised an example in which several algorithms were created to help predict skin cancer: one was 94 per cent accurate overall, but 99 per cent accurate for men and only 89 per cent accurate for women; a second was less accurate overall at 91 per cent, yet more accurate than the first for women at 90 per cent and less accurate for men at 92 per cent. “What we can see,” David added, “is that a decision was made at some point that the accuracy for men was more important than the accuracy of the algorithm for women…and that’s an example of where bias is being introduced into the overall determination of those solutions.”
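As a rough illustration of the arithmetic behind those figures, the sketch below reproduces them under the simplifying assumption of an evenly split test population, where overall accuracy is simply the average of the two subgroup figures and can therefore mask a large gap between groups.

```python
# Illustrative arithmetic only, using the accuracy figures quoted in the talk.
# Assumes equal numbers of men and women in the test set, under which the
# quoted overall figures reconcile exactly with the subgroup figures.
models = {
    "algorithm_1": {"men": 0.99, "women": 0.89},   # 94% overall
    "algorithm_2": {"men": 0.92, "women": 0.90},   # 91% overall
}

for name, acc in models.items():
    overall = (acc["men"] + acc["women"]) / 2      # average of the two groups
    gap = abs(acc["men"] - acc["women"])
    print(f"{name}: overall={overall:.2%}, men={acc['men']:.2%}, "
          f"women={acc['women']:.2%}, gap={gap:.2%}")

# Choosing purely on overall accuracy picks algorithm_1; weighing the
# between-group gap as well favours algorithm_2.
```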

On human interpretation, David raised a study by Columbia University which found diversity in data science teams and among programmers to be key in reducing algorithmic bias.

To design for equality instead, David highlighted the need for: taking into account unequal access to technology and resources, discriminatory healthcare processes, and biased clinical decision-making; looking at the data, including how to remove discrimination from training data and how to review algorithms continually; ensuring correct and appropriate design and deployment practices around inclusion when developing algorithms; and taking care not to disregard or deepen digital divides, or exacerbate global health and wealth inequality.

Best practice, he said, includes making sure clinical study participants and data sets are representative of the intended patient population, that training data sets are independent of test sets, that algorithms are tested under clinically relevant conditions, ensuring inclusive design, and making sure an algorithm is developed for diverse groups rather than just one demographic.
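As a loose sketch of two of those practices, representativeness checks and independent test sets, the example below uses hypothetical file and column names, an assumed target population mix and an arbitrary tolerance; it is not a description of any particular trust’s pipeline.

```python
# Minimal sketch of two best practices, with hypothetical names throughout:
# (1) check the study cohort is representative of the intended population,
# (2) hold out an independent test set stratified by demographic group.
import pandas as pd
from sklearn.model_selection import train_test_split

cohort = pd.read_csv("study_cohort.csv")              # hypothetical dataset
intended_population = {"female": 0.51, "male": 0.49}  # assumed target mix

# 1. Compare the cohort's demographic mix with the intended patient population.
cohort_mix = cohort["sex"].value_counts(normalize=True)
for group, target in intended_population.items():
    share = cohort_mix.get(group, 0.0)
    if abs(share - target) > 0.05:                    # arbitrary 5-point tolerance
        print(f"warning: {group} is {share:.0%} of cohort vs {target:.0%} intended")

# 2. Split so the test set is independent of the training data and preserves
#    the demographic mix (stratified on sex here; real studies use more factors).
train, test = train_test_split(
    cohort, test_size=0.2, stratify=cohort["sex"], random_state=0
)
```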

To be an ally in the digital world, David advised: raising awareness amongst under-represented teams; recruiting and training a diverse team; providing challenge and oversight around the training and testing of algorithms; and ensuring governance is in place to undertake regular performance reviews of algorithms.
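In very simplified form, a recurring governance review of the kind described might look something like the sketch below, which recomputes per-group accuracy on recent cases and flags any group that falls below an agreed threshold; the file layout, column names and threshold are all hypothetical.

```python
# Minimal sketch of a recurring governance check, with hypothetical names:
# recompute per-group accuracy on recent cases and flag any group whose
# performance drops below an agreed minimum.
import pandas as pd
from sklearn.metrics import accuracy_score

THRESHOLD = 0.90                                     # agreed minimum per group

def review(predictions_csv: str) -> list[str]:
    """Return the groups whose recent accuracy falls below the threshold."""
    df = pd.read_csv(predictions_csv)                # columns: group, y_true, y_pred
    flags = []
    for group, rows in df.groupby("group"):
        acc = accuracy_score(rows["y_true"], rows["y_pred"])
        if acc < THRESHOLD:
            flags.append(f"{group}: accuracy {acc:.2%} below {THRESHOLD:.0%}")
    return flags

if __name__ == "__main__":
    for flag in review("last_quarter_predictions.csv"):
        print("review action needed ->", flag)
```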

David concluded: “AI is now starting to impact society across the whole spectrum of the way we live our lives…we need to make sure that we stop the development of bias in these algorithms right now – and that we don’t close our minds and eyes to these but actually address it. Over time it will snowball but, at present, we’ve got time to address this and do something about it.”