We were recently joined by Dr Hatim Abdulhussein, national clinical lead for AI and digital medical workforce at the Directorate of Innovation, Digital and Transformation at Health Education England (HEE).
For his session, Hatim explored how AI can be used to improve equity. He began by explaining why tools such as AI and automation are needed, drawing on his perspective as a practising GP, before moving on to discuss the current landscape and opportunities for AI within the NHS.
To start, Hatim highlighted the importance of AI for the future of healthcare. As we move into integrated care, he said, it is important to ensure that the health and care delivered is equitable. Using new and emerging technologies such as AI “needs to be the foundation of what we do and how we move forward.”
A need for new tools
As a GP working in a clinic, Hatim recognised that if you don’t work actively with technology you are “essentially trying to roll a boulder up a hill.” With multitasking as a necessary part of his and every other GP’s routines, Hatim highlighted a need for tools that help him carry out tasks more efficiently, to release time to spend with patients.
“We know that there are certain factors which can contribute towards my decision-making as a practitioner – cognitive biases,” Hatim said. “Therefore I am trying to manage these cognitive biases whilst also making quick decisions within my 10-minute consultation with the patient in front of me. I’m trying to make fast, active decisions, in collaboration with the patient, to improve their care and health. There is a need to regulate cognitive biases, and technology can help us do that.”
Medical knowledge is also expanding rapidly; Hatim shared an estimate that medical knowledge doubles every 60 days. “That is quite significant – how do I, as a practitioner, keep up with this exponential rise? How do I keep up with all the new guidelines and all the new evidence that is forming?”
A desire for more actionable intelligence
“There is a general desire for more actionable intelligence, driven by large data sets and the potential they offer,” Hatim said. “Therefore we need to think about how we can use technology to take action on those data sets, to come up with tangible solutions to improve all of the processes that people go through within their health and care.”
With juggling tasks a major part of any GP’s workload, and a need to balance personal lives too, Hatim pointed out the need to “manage all of these things better. As life becomes more complex, we need ways to automate some of those simple tasks. There are really good technologies that have been emerging in that space. Whilst we might argue that they don’t necessarily fit within the umbrella of AI, because there’s no actionable intelligence, it is still automation. There’s a process that you go through with basic automation, and intelligence within that – we need to think about what procedures we follow and how we can manage this from all angles.”
There’s also a need to prepare the workforce for the future, with focus needed on improving medical education and training opportunities. “It becomes more and more difficult for learners when they’re out in placements and when they’re developing their skills, because of the demands and the pressures of delivering good health and care to the population,” Hatim said. “There is no question that artificial intelligence can help with that, with some of the applications that we are seeing develop in this space.”
Population and precision medicine
“The Topol Review, published in 2019, clearly described that precision medicine and population-based medicine are the direction in which we want to go. Professor Eric Topol described the vision for healthcare that is safe, ethical, rational and tailored to the individual. This is something that we want to get to, but we need to make sure we have the right evidence and the right intelligence to get there and bridge the knowledge gap we currently have.”
A diagram at 06:13 in the video below demonstrates how building our knowledge in this area can allow us to progress along the ‘data-intelligence continuum’. Data, once collected, can be contextualised, compared, conversed, connected, filtered, prioritised, ordered and framed. This leads to the development of information – equations, ideas, questions, simple stories. From there, this information becomes organised and structured into theories, conceptual frameworks, complex stories, facts; thus the information becomes knowledge. Finally, knowledge becomes wisdom in the form of books, paradigms, systems and so on.
By getting to the information and wisdom stages, we can “essentially become a learning health system.”
Hatim also shared the descriptive-to-prescriptive continuum, with a diagram displayed at 06:42. “You’ve got descriptive data, telling you what’s happening in real time, moving onto diagnostic data which explains why things are happening and allows you to predict things – what’s likely to happen based on your diagnostic data? But what do you do with that, what are the outcomes, the decisions, the recommendations?” Reaching the prescriptive stage means using advanced algorithms, built on current data analytics, to test the potential outcomes of each decision and recommendation and identify the best course of action.
What do we mean by AI and machine learning?
Hatim noted that the Topol review defined AI as a “broad field of science encompassing not only computer science but also psychology, philosophy, linguistics and other areas. AI is concerned with getting computers to do tasks that would normally require human intelligence.”
It is important that we don’t forget the wider impact, however, Hatim said. All of those areas such as philosophy and linguistics “contribute to the role of AI, but ultimately it’s computational, it’s mathematics. It may never be able to do tasks required beyond the level of human intelligence, but it can certainly do tasks to support and augment the role of human intelligence.”
Hatim noted that within the realm of AI, you have machine learning, and deep learning within that.
“Machine learning involves learning complex sets of rules through the training of models on data sets,” he said. “Eventually you get to a point where machine learning, because of the neural networks that have formed, can start to perform deep learning and carry out tasks with a more general purpose.”
Next, Hatim moved on to discuss some of the challenges that AI faces in healthcare, and suggested some ways to tackle those challenges or new ways of thinking that could help.
Data
“The biggest challenge is the data challenge,” Hatim said. “We know that data, and particularly health data, is hosted in multiple different areas. How do we pull that data together? How do we make it more structured? How do we make it interoperable across different data sets held by different bodies in different places?”
To overcome the healthcare interoperability challenge, he said, we need to consider all barriers in terms of governance, outdated systems and privacy and security challenges.
“We did a piece of work in my practice recently where we reviewed the way we code our data,” Hatim shared. “We thought about how we were actually missing many opportunities to code data through the information that we receive – hospital letters, for example – which would enhance the knowledge that we have about patient care.”
Another challenge can be keeping data sets up-to-date, he pointed out. “Often, we don’t look at patient details. We don’t look at a patient’s problem list and refresh it, which then poses challenges when you try to take that data out to gain further insights.”
It is a complex area with regard to the location and format of data, and the regulations and requirements to keep it safe, Hatim noted, and it can be challenging to navigate. But, he said, “That is starting to happen with the development of secure data environments. It gives us a burning platform to start to move forward on this agenda.”
Technology
Technology itself presents a further challenge: it must be explainable, Hatim said. This requires investment, he added, and the strengthening of priorities on where efforts should be focused around AI.
Workforce
“The other key barrier is the workforce, and my role is specifically focused around workforce,” said Hatim. In general, research indicates that the majority of the NHS workforce, along with the wider public, are not all that familiar with automation and AI in healthcare.
Hatim shared results from the Health Foundation’s ‘Switched On’ study, which indicates that certain health and care professional groups, such as doctors and dentists, are more likely to be familiar with or positive towards AI.
“Other groups may be less positive, and therefore that presents us with a challenge around not having a workforce that is aligned and joined up. This could lead to challenges around equity in terms of the care that’s delivered.”
Familiarity helps, Hatim said. “The more people understand AI, the more likely they are to see the benefit of it and recognise the fact that it can actually augment the work that they do. It’s important that we work to increase people’s familiarity with these technologies.”
The biggest challenge in this area, Hatim noted, is simply the number of people working in it. “We are moving towards a data desert if we don’t increase the number of people in digital, data and technology roles,” he said. “They can be from clinical or non-clinical backgrounds – the important thing is that we invest in this space, to ensure we have a workforce capable of using digital and data to build a health and care system for the future.”
Legal and ethical
For patient-facing healthcare professionals, making decisions with patients sitting in front of them, there are also legal and ethical factors to consider.
“As a GP I have traditionally made decisions based on prior learning and training experience, based on demographics, imaging – all the information that I have available to me,” Hatim said.
“Now we are introducing a new model, a predictive AI model that is going to give me advice and support in terms of what I want to do next. Therefore there is a need to think about how this impacts my clinical decision-making. What are the legal and ethical barriers around that? It’s important that we explore that and have conversations about it. It’s down to the individual clinician working with technology to have those skills.”
AI within the NHS
In recent years, there has been a “huge investment” in AI within the NHS, Hatim said. “We had the formation of NHSX, at the time, and the NHS AI Lab, which still exists today in the NHS Transformation Directorate. We had the formation of the Artificial Intelligence in Health and Care Awards, where funding was given to technologies that are working to develop AI for health and care purposes at different levels.”
Building on the Topol review, HEE decided to take a look at the current landscape of AI and data-driven technologies in the NHS and produced the ‘AI Roadmap: methodology and findings report’. The report sought to examine taxonomies, the spread of adoption, the potential workforce impact and more.
Initially, work was put into understanding current data sets. Hatim described how data sets such as the NHSX State of the Nation AI survey and NIHR Horizon Scan were examined to explore associations between parameters and to document gaps and limitations to the evidence available.
“We developed a database which then broke down these technologies into taxonomies and templates, and thought about the impact on the future workforce as well as populating the database.” This led to the formation of an interactive dashboard and a selection of case studies and interviews with innovators and NHS staff, and to the writing of the overall report.
“We found that the majority of the technologies were diagnostic – 34 percent were in that space, which probably isn’t surprising because there’s been lots of press attention and evidence given on technologies helping in imaging.”
The report found that a further 29 percent of AI technologies were in automation and service efficiency; 17 percent in P4 (predictive, preventive, personalised and participatory) medicine; 14 percent in remote monitoring; four percent in therapeutics; and two percent were classed as ‘other’.
On remote monitoring and P4 medicine, Hatim commented: “This really interests me, especially when we think about health equity. They really allow us to help keep people safe at home and help manage people in the community. They will also help us to deliver care that fits that definition of being safe, ethical, rational and tailored to the individual. The growing emergence of technologies in that space should be really looked into in terms of harnessing them and building further evidence and support, so that they can be adopted at scale.”
The report found that around 80 percent of these technologies are ready for large-scale deployment within the next five years. “That’s fascinating – it shows that we really are not far away from AI having a clear impact across the NHS.”
Looking at the workforce groups that AI is most likely to affect, the report found radiology, general practice and administration at the top of the list. “Ultimately, these are technologies that are going to have some impact on the way that everyone works,” Hatim said. “It’s for us to think about how we can best use a technology to work within the skills we have, but also think about how we can deploy our skills elsewhere and free up time.”
Finally, Hatim highlighted the roadmap’s section on AI spread. The report indicated that the majority of AI use is clustered around big cities. Hatim said: “We know that some of the biggest challenges we have around health equity are often in our rural and coastal communities. We are recognising that a lot of the technologies are building hubs within the big cities, but they have a role to improve care across the region… I think ICSs [integrated care systems] do have a responsibility here to look at what’s available within their patch and support that adoption, spread or use across rural and coastal communities.”
Health equity
Hatim pointed out a need to understand the background of health equity so that the core concepts can be applied to AI.
He shared a diagram highlighting the differences between equality, equity and justice, available to view at 20:42.
Justice should be the overall aim, he noted. “You want to build a society where people don’t need support in the first place – where the cause of inequity has been addressed, a systemic barrier has been removed. That is a greater challenge, but it’s important to remain aware of it.”
On the social determinants of health, Hatim pointed out that while a person’s own condition, the system around them, and individual factors such as age and sex do affect their health, “there are much larger aspects that we need to think about in terms of wider society – the living and working conditions we have, the impact of unemployment or the environment that you work within.”
Therefore, when thinking about how to design and change healthcare with technologies, Hatim highlighted a need to think about the principles of the quadruple aim: to improve population health, reduce the cost of care, enhance patient experience and improve provider satisfaction.
“We need to think about how we tackle those issues and build a workforce or system that really enables the workforce and prevents burden,” he said.
A really useful framework to apply when considering the use of AI and technology and its role in reducing healthcare inequalities, Hatim continued, is Core20PLUS5. This framework pairs a target population (the most deprived 20 percent of the national population as identified by the Index of Multiple Deprivation, along with ICS-chosen population groups experiencing poorer-than-average health access, experience and/or outcomes) with five key clinical areas of health inequalities. The details are available to view at 29:02.
There are also communities that will be missed in the data, he added, and where technologies are being used there is a need to ensure that people are not excluded from access. Hatim emphasised the need to “make sure that they have access, and also make sure that we have the right data and information to be able to build solutions that really support inclusion for them.”
At HEE, work is ongoing around building confidence in AI, “to have clear policies and guidelines and regulatory standards that we know we can uphold and feel confident in,” said Hatim. “At the same time, at local level, we want to make sure that we have the right level of validation there – that we know the technology will be right for the population we are serving and there’s a culture within the organisation that will allow those policies to be embedded, used and implemented appropriately.”
Hatim emphasised a need for pro-innovation attitudes. “There’s no reason to form barriers – otherwise, we’ll get even more of a postcode lottery within the NHS wherein certain technologies are available in one place but not in another. We need to make sure we have clear pathways in place to build real-world evaluation and evidence in the local environment.”
Finally, Hatim touched upon the clinical use layer. “We need to make sure that we can create the right comfort for both the person using the technology from a clinical background and the person on the other end,” he said. “We need to be sure that we are supporting the patient to understand how the technology is being used, the impact it is having and what it is leading to.”
We would like to thank Hatim for sharing his time and thoughts with us.