Voice

HTN Voice: Healthcare’s AI toxicity problem

By Dr Pritesh Mistry. Pritesh draws on his experience leading innovation initiatives in primary and secondary care, co-developing solutions with clinicians, researchers, industry and patients at St Thomas’ Hospital and the Royal College of GPs, to provide insights on innovation in healthcare. Read weekly articles on his blog here.

The narrative and framing of artificial intelligence in health and care risks creating a toxic culture, one that reduces healthcare to a series of transactions and removes the human element.

If AI is going to be accepted in healthcare, we need to change the language and culture being cultivated around AI in health and care. The majority of conversation and wording compares AI to healthcare staff, often making unflattering claims of AI superiority. Not only does this create divisions, but there is a risk the legacy of AI will be a toxic culture in healthcare.

The human gold standard v AI doctor

One of the barriers to adoption that is rarely discussed is cultural acceptance of a technology, tool or process. This is especially difficult with tools like AI, which carry outsized expectations; much of the material we read on AI is steeped in hype, with an evidence base that is opaque and often unreproducible. The term AI itself is also confusing: it encompasses multiple approaches, from natural language processing (NLP) to adaptive software (machine learning). This mismatch in expectations, terminology and evidence leads to disappointment and confusion, and those initially interested quickly disengage and drop out of the conversation. This is a significant problem, because AI needs to be co-created so that it operates within clear bounds and acceptable risks, with reproducible evidence, so that clinicians are comfortable working with AI tools.

There is an ongoing narrative of the “AI Doctor” and of AI replacing doctors (for example, that AI is better at identifying tumours than a doctor, or better at diagnosis than a doctor). Publications comparing AI with a clinician in this way make us think of healthcare as a transactional process. This is a huge simplification and risks framing the clinician’s role as completing a series of single tasks, when a patient often receives many things through a single engagement with a clinician, including diagnosis, treatment, comfort, empathy, support and assurance. Unfortunately, this narrative also ignores the many different professions that work in unison to care for patients.

Apples and oranges

The doctor-versus-AI framing is perhaps the biggest cultural challenge to AI tools being used. What we are seeing is a “typical” clinician being held up as the gold standard, but measured under specific circumstances and against specific metrics that are often not representative of the real world. The language used isn’t “AI is better than all doctors” but “AI is better than some”. This opens the door to claims that some doctors are underperforming compared to their peers. It is no secret that there are variations in care; however, healthcare is complex and variations can have multiple sources. This is a slippery slope that leads to metric-based performance measures and a toxic blame culture, where we measure single clinical tasks and correlate these with good patient care. But correlation isn’t causation, and as noted above, a clinician does much more than a single task; a consultation is a complex interaction fulfilling multiple jobs.

There are also discussions of bias. Often the question raised is: what is the AI’s bias? But this question ignores the bias that already exists in current health and care approaches.

In reality, it is possibly irrelevant whether a doctor or AI is better at a specific single task. What matters is how the AI and clinician work together in a care pathway, and what improvement that brings over the existing pathway. If there is bias, it is possible that foreknowledge and suitable training will enable clinicians to mitigate it. Any AI developed and used today will be a narrowly focused tool. It will not be general-purpose, widely applicable software that is adaptable like a person; that type of AI (artificial general intelligence) is still some way off.

Collaborative AI

Multiple challenges face health and care provision, including insufficient workforce numbers and clinicians who are overwhelmed by expectations and experiencing burnout. Technology needs to improve patient pathways and support staff, not create division. Culture is an integral part of this: the current framing of AI has the potential to create a culture where individual clinicians are labelled as low performing, which can create a blame culture and exacerbate already challenging conditions. Healthcare is complex, and we need to move towards thinking about improvements in every care pathway through the clinician+AI combination, not pitting one against the other.


— HTN Voice is a new channel, curating guest articles and opinion from across health and care. To participate please email marketing@htn.co.uk —