Panel discussion: will the reality of AI live up to current hype and how do you manage bias in healthcare data?

For an HTN Now panel discussion this month we were joined by panellists including Puja Myles, director at MHRA Clinical Practice Research Datalink; Shanker Vijayadeva, GP lead for digital transformation in the London region at NHS England; and Ricardo Baptista Leite, M.D., CEO at HealthAI, the global agency for responsible AI in health.

The session explored topics including what is needed to manage bias; what “responsible AI” really looks like; how to ensure AI is inclusive and equitable; how AI can help support underserved populations; the deployment of AI in the NHS; and the potential to harness AI in supporting the shift from reactive to proactive care.

Starting off with the introductions, Puja highlighted her background as a public health specialist, and her current role with MHRA’s real-world data research service, the Clinical Practice Research Datalink (CPRD). On how her role links in with AI, Puja said: “The AI revolution is going to be fueled by data, and we’re constantly getting asked whether our data is ready for AI. Part of my role focuses on supporting regulatory science, and a lot of those aspects are around things like AI validation, concept drift, synthetic data, AI explainability, and so on.”

Up next was Shanker, a GP by background, whose role now covers the London region for NHS England. He explores the use of digital in primary care, noting that AI and automation are “big” topics in this space at present. Part of his role, Shanker shared, involves examining different pilots, vendors and viewpoints to see what impact they could potentially make in primary care.

Finally, Ricardo explained his work as CEO at HealthAI, a not-for-profit organisation dedicated to expanding countries’ capability to regulate AI for healthcare. He shared some of HealthAI’s experiences as an implementing partner for the WHO along with other international organisations and governments, helping them build regulatory capacity around the validation of AI-driven tools for health systems.

On his own background, Ricardo said: “I’m a physician trained in infectious diseases and worked in the Portuguese NHS for several years before being elected as an MP, where I served in the national parliament for four terms. From there I founded a non-governmental organisation called the UNITE Parliamentarians Network for Global Health, which became a network of legislators from more than 110 countries around the world, committed to science-based policymaking in the field of health.”

The role of AI in healthcare

What potential do Puja, Shanker and Ricardo see for AI in healthcare?

Ricardo talked about the importance of figuring out the “why” for using AI, prior to implementation. “There’s an attraction toward shiny objects, and many institutions are adopting AI because it has become the buzzword, without doing the necessary thinking about why this technology,” he reflected.

When it comes to healthcare in general, he continued, there is a need to “rethink how we can shift from the current disease-driven model toward one that is proactively focused on health, wellbeing, and quality of life”.

He raised a challenge with the current model of care in most of the richer countries around the world, calling it an “industrialised healthcare model whereby we incentivise disease, which leads to a burden of disease and a vicious cycle of rising costs, and therefore to a lack of sustainability.” The outcome of this focus is a two-tiered health system, where only those with resources are able to get access to care, in a way that he called “exactly the opposite to the principle of universal health coverage that we all advocate for”.

In Ricardo’s view, AI technologies have the potential to help put into practice the ideas of value-based and data-driven health systems that can be focused on prevention and timely diagnosis; but to get to that point, there is a need to make sure the governance model used for development and deployment of AI is based on design thinking and the principles of responsible AI such as fairness, inclusivity, accountability and transparency.

He recommended a book called ‘Power and Progress’ by Daron Acemoglu and Simon Johnson, which looks at the emergence and evolution of technologies over the last thousand years. “The very few cases where technologies have led to a universal benefit have been when they were designed from the start thinking about that benefit. We’re at a very early stage of AI, where I think we still stand a chance to do that for healthcare.”

From his perspective as a GP, Shanker emphasised the potential for AI in helping to overcome challenges such as increased workload and demand, as well as balancing the complexity that accompanies patients living longer. “It also comes down to how we improve the patient experience, so they get what they expect from the NHS whilst we’re struggling,” he added. “AI tools can really help with things like patient safety and experience, as well as supporting efficiency.”

Puja picked up on Ricardo’s point about “not using AI for the sake of AI”, but instead looking at challenges and what solutions are available.

“We were recently involved in running a challenge alongside the FDA and the Veterans Health Administration in the US,” she said, “where synthetic data was made available. We invited the wider research and AI community to develop algorithms that could predict cardiovascular outcomes.”

The outcomes in accuracy and performance, she explained, showed that “some of the standard statistical models like logistic regression actually performed really well. Those models have the advantage of being inherently transparent and better understood; from an end-user perspective, if an AI algorithm presents you with a result, you need to have some understanding about what logic it has used. Then, as a clinician, you can feel confident in accepting the results.”
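The transparency Puja describes can be shown with a minimal, hypothetical sketch (this is not the challenge code; the cohort, features and coefficients below are invented for illustration): a logistic regression’s fitted weights map directly to odds ratios that a clinician can inspect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic cohort (invented for illustration): standardised age
# and a smoker flag, with a made-up true relationship to a binary
# cardiovascular outcome.
n = 5000
age = rng.normal(0.0, 1.0, n)
smoker = rng.integers(0, 2, n).astype(float)
true_logit = -1.0 + 0.8 * age + 1.2 * smoker
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Design matrix: intercept plus the two features.
X = np.column_stack([np.ones(n), age, smoker])
w = np.zeros(3)

# Plain gradient descent on the average logistic log-loss.
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

# Transparency: each coefficient is a log-odds contribution, so exp(w)
# gives an odds ratio per feature that an end user can sanity-check.
odds_ratios = np.exp(w)
print(dict(zip(["intercept", "age", "smoker"], odds_ratios.round(2))))
```

In a real evaluation a validated, regularised model would be used; the point here is only that the fitted weights themselves are human-readable, which is what makes such models easier for clinicians to trust.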

Building a foundation from solid and reliable data

Shanker talked about the importance of understanding the potential for AI to “magnify biases that already exist in health data”, whereby an AI tool is “fed lots of data which is not necessarily representative”. This brings to light the need to consider the sources of data which are used for the development of AI solutions.

In the healthcare system, he continued, there should be consideration of a potential framework or governance that should be developed in this area, along with an audit trail to ensure transparency around the types of data fed into an AI solution.

Agreeing on the need to recognise the potential for bias in AI, Ricardo said: “There is inevitably bias in humanity, so technology is going to reflect that. I don’t think we can expect zero-bias technology in the future, but what we can expect is for us to do everything in our power to mitigate that risk.”

One factor in this, he suggested, could be “stronger post-market surveillance mechanisms toward ensuring that whatever the technology is delivering is aligned with the promises that have been made to regulators”. He pointed out that this would help in building trust towards the adoption of these technologies, which would in turn support both patients and suppliers.

Pointing to another potential challenge, Ricardo noted the need to take into account the context in which AI technologies are deployed, as the way solutions perform in one particular region or country “may not be the case if you export that same piece of technology without adapting the training and the datasets”.

He acknowledged: “I don’t think anyone has the magic bullet yet to solve all of these open questions. But that’s why now more than ever, we have to work together as a community.”

Puja agreed on the need for stronger transparency around what data has been used to train AI algorithms, along with which subgroups might not be represented in the data, and why the data was collected in the first place.

She also outlined guidance developed by the MHRA, the US FDA and Health Canada on good machine learning practice, which highlights the importance of having multidisciplinary teams right from the design stage when developing AI, alongside other principles that are also elements of responsible and ethical AI.

Looking at another aspect of AI development, Ricardo suggested that AI technologies and solutions should go through a regulatory pathway “just like with pharmaceutical products”. Additionally, he said, an incentive model should be developed for the private sector to “want to go through the regulatory process; to be willing to do so; and to help fund the needed resources to support the regulatory capacity that will be needed to keep pace with emerging technologies”.

He observed that in the UK, these kinds of investments in regulatory and innovative sandboxes are “already happening,” and shared the view that they will be “critical moving forwards”.

The capabilities of “any regulator in the world” to keep up with the number of technologies reaching the market, however, could be an issue, and Ricardo’s belief is that “we will probably have to have an agreement where developers have an obligation to notify regulators of potential adverse effects of their technologies based on real-world data, and then regulators should have a sampling process to do an assessment”.

Inclusivity and equitability in AI technologies and solutions

Our panellists moved on to consider ways of ensuring inclusivity and equitability in AI technologies and solutions, with Shanker highlighting two important questions: whether users can use AI, and whether that AI is inclusive for the patient population it is serving.

“We have a whole spectrum of workforce,” he said, “and those of us in this webinar are probably not representative of the whole NHS workforce, because we have a particular interest in AI. There are many sections of our workforce who probably have very limited awareness of AI or even the scope or definition of AI, which is a huge challenge.”

Shanker also considered that the real success of AI may be after “the hype dies down”, adding: “I think AI will be successful when we don’t refer to it as AI, but just as something that we are doing, something we are used to. It’s not an AI tool, it’s just a tool. These are the simple things that we need to do, to bring it down to a level that our staff can engage with.”

Puja placed the spotlight back on data bias. “AI is a tool, and it’s up to us how we want to use that tool,” she said. “So if we want to serve underserved populations, we need to factor that in at the design stage, and ensure that population is represented in the training data used for the AI.”

Looking at regulation, Puja also recognised that whilst manufacturers of AI algorithms classified as a medical device would be subject to conformity assessments and requirements for post-market surveillance, there is a role for “constant evaluation in practice”. This, she said, could reveal “little quirks such as differences in EHR coding, which could make a difference to implementation” when the AI tool is deployed in healthcare.

The work starts at the very beginning, emphasised Ricardo, when it comes to redesigning the system and “making sure that by default the whole system is directed at preventing disease, promoting health, and lowering the burden of disease”. To make this happen, he continued, goals must be proactively defined, along with the metrics that you want to reach. “If you want to lower the percentage of people being diagnosed with diabetes or hypertension in your community, for example, you need to have a baseline assessment of that.”

Reflecting on his experiences of working in infectious diseases, Ricardo shared that most of his patients at the time were “living at the margins of society” and therefore often do not attend formal healthcare settings. He emphasised the power of big data and analytics to “make sure we’re reaching those that can potentially benefit from interventions and using the power of AI to support them in getting access to care”.

Often in such cases, it can come down to whether we prefer to deliver “no care, or AI-powered care”, he stated, “and that is a very challenging issue, since we have the principle of first do no harm in medicine. But I think it is critical that we assess and compare the absence of care to the care that we can deliver with this human-in-the-loop approach.”

How can AI support a move from reactive to proactive care?

Ricardo stressed the importance of setting incentives and clear goals to encourage development in the right way. “AI opens the door for more personalised, precision medicine, and will allow us to be much more efficient at the individual level,” he said. “In my view, it is making sure that we have less of a trial and error approach, which has been the standard of care and medicine for centuries.”

He also noted the potential for AI to “end the need for therapeutical guidelines – because if we get to a point where every instance of patient care is based on their specific traits and needs, we shouldn’t need broad approaches anyway. At present, we are using a shotgun when in the future we could be using a laser-focused approach.”

To take advantage of these opportunities, we need to “create an environment where in a multi-stakeholder approach we allow space for innovation,” Ricardo reflected. “I think the NHS offers the perfect ground for facilitating that, precisely because you can test out different solutions and quickly scale them if they are truly effective.”

Shanker considered where the NHS should focus efforts in the coming years and the potential to accelerate adoption of these technologies. “We need to avoid recreating the wheel,” he said. “We don’t want to have to go through the same governance processes and the same checks with different suppliers. We can save the resources that would be used going through those steps.”

Post-market surveillance is also integral, Shanker noted. “Products are rapidly evolving, and we need to consider whether they’re still acceptable. The digital pathways framework has been cancelled, or significantly delayed; but I think there’s a hope to revisit it now. If products were on frameworks it would reduce some of those issues, and it would mean we don’t have to reproduce our own internal governance and database deployments.”

Puja commented that learning is ongoing when it comes to AI. Whilst we may not have all of the answers yet, she pointed out that knowing the questions and the issues will help in providing us with a roadmap for where we need to go.

Expanding on that roadmap, Puja said: “First is building the infrastructure and the skills – we need an AI-enabled infrastructure for collecting healthcare data in a way that it can be used to train AI algorithms, and we also need to be able to translate it back into those EHRs in a seamless back-and-forth.”

End users will need to have at least a basic understanding of AI tools and their uses, she continued. “There are some interesting conversations happening now about human-in-the-loop, and whether we should instead have AI-in-the-loop; where AI is serving humanity and informing human decisions.”

Ricardo cited the example of the European AI Act, covered by HTN here. “Halfway through that process, when they had assessed the risk of large language models as low risk, ChatGPT was released,” he shared. “That led to them reconsidering and changing them to high risk”. This shows that we “need to have some humility, as the way technology is evolving in this space is showing exponential growth. It’s extremely important that each country is capable of building their own capacity, be that in regulation, governance, IT infrastructure, or human capacity.”

Coming back to Puja’s point about AI-in-the-loop, Ricardo agreed that this consideration is an important step, but “in health informatics at hospital level in many parts of the world, typically it’s still focused on building networks and fixing computers. The needs of today and the years to come are going to be much more complex, and will demand much more knowledge in the AI space. We need to focus on building that basic foundation so we can be better prepared to make the most of the evolution of the technology.”

AI to reinforce and complement humanity, and keeping the patient at the centre

Offering a closing comment, Ricardo cited a study published in JAMA which compared a chatbot with a clinician in giving health advice to patients, and which found that when patients were asked about empathy shown, the robots were considered to be more compassionate. “This is clearly shocking,” he reflected, “as what makes us truly human should be the compassion and empathy that we bring into the health system.”

He noted the heavy burden on hard-working members of clinical teams, and encouraged the use of AI to “let go of mundane tasks so that we can reinforce humanity and healthcare, and so technology can actually open up the way towards further compassion. To do that, as we have discussed, it needs to be done by design from the start; so that’s a call to action to everyone with responsibilities in every health system.”

Puja closed by saying that whilst AI is a “buzzword” right now, it is important to “start with what you’re trying to achieve, and then consider whether AI is the tool needed to accomplish that”.

The human-AI interface is also important, she said, and she highlighted research into the differences in the ways AI and human beings make decisions. “AI tends to be very much black and white,” she said, “whilst human thinking is often much more clustered in the middle.” Therefore, something to think about for the future is how the human-AI interface can be optimised so that humans and AI can “complement each other’s performance, rather than trying to fight that one is better than the other”.

Finally, Shanker emphasised the importance of keeping the patient at the centre of the process, and of building patient safety into systems.

“We’re all passionate, but we need to think about the things we can do today that will build the foundations – the simple skills or simple things we can start with,” he concluded. “We don’t have to wait for the perfect tools to come along, we can start that journey now.”

We’d like to thank all of our panellists for their time in participating in our session today.

We have plenty of HTN Now webinars lined up for the latter half of 2024, so be sure to visit our events page to register for any sessions you’d like to attend.