Interview: Dr Hatim Abdulhussein on artificial intelligence in healthcare

We were joined by Dr Hatim Abdulhussein for our latest interview. Hatim is National Clinical Lead for AI and Digital Medical Workforce at the Directorate of Innovation, Digital and Transformation at Health Education England (HEE), in addition to his role as a GP.

Hatim shared some of the AI projects he is working on, his views on the adoption of AI in healthcare, the role of regulation and governance, and more.

To begin, Hatim introduced his role, his area of work and his interests.

By background I’m a GP; I work in a practice in North West London.

Over the last year and a half, I’ve been leading Health Education England’s Digital, AI and Robotics Technologies in Education (DART-Ed) programme. The real aim of that programme was to learn from the findings of the Topol Review in 2019, around the emerging technologies that would impact the workforce within the next 20 years, and to deliver some of the recommendations around education and training for these technologies. Ultimately, if we have safe, ethical adoption of technology, then we can improve patient care.

My role is to understand what technologies are emerging and think about what the workforce impact might be from those technologies. When we talk about workforce impact, it’s about understanding how the way we work will change, and how we might need to change the skills that we have as a result of working with technology.

At the same time, it’s important to reflect on the technologies that we are already using in practice. As a GP, I now use a variety of technologies in my practice, from clinical triage tools to e-consultations to telemedicine tools. So what does that mean for me – how do I need to develop my skills in order to work with these technologies as effectively as possible? There’s still a gap in education and training processes and curricula in this area, so it’s about understanding how we can plug that gap. The first step is to understand what people need to know, what their learning needs are. That’s a key part of our work. Off the back of that, you can start to deliver education and training.

I got interested in this space because I’m very passionate about improving patient care. Technology is one aspect of how we can do that. But I have clearly seen throughout my career as a doctor that although projects can be undertaken with the best of intentions, workforce impact and workforce preparation aren’t always considered. A lot of the time, technology gets implemented but isn’t successful because that engagement work hasn’t been done.

There are different aspects to the responsibility here. It’s the responsibility of those who are making those decisions to implement technologies, to ensure that they have a good plan for workforce education and training. It’s also the responsibility of the healthcare professionals to ensure that they are digitally literate and able to engage appropriately with changing technologies. And it’s the responsibility of our educators to make sure that the necessary learning exists in the system for people to access.

That’s what drives me within this role, to bring all of those aspects together and best support our system to do that.

The role of AI

A lot of our focus has been on the workforce impact of AI technologies, and on policy for how we develop the education strategy around AI.

In January, HEE published something called the AI roadmap, which is a stocktake of current AI technologies in the NHS, based on self-reported data from various datasets like the State of the Nation survey and the Innovation Pipeline. That allows us to understand what technologies are out there and where they sit within a taxonomy; for example, are they diagnostic or are they focused on service efficiency? Are they driven towards remote monitoring or population health? The roadmap helps us dig deeper to understand how they will affect the different workforce groups. We’ve done a couple of case studies to explore this, one in lung imaging and one on mental health wards, to really develop our understanding of how the technologies in question changed the way that people worked.

The roadmap has been well received; it provides a lot of clarity on the current spread of AI technologies within the NHS. It highlights where we need to go further, and it also shows where we need to think more specifically about workforce preparation.

We followed the roadmap with a report called ‘Understanding healthcare workers’ confidence in AI’. That’s a really important piece of work because it allows us to start to tease out the challenges and barriers we need to overcome to get to a point where our healthcare workers are confident with AI. It talks about what will drive confidence in AI at a national level, at a local level and for individuals. It’s been a really interesting piece to do with different stakeholders, bringing their expertise together and enhancing the conversation in this space. We hope to follow it up with a piece on educational needs as a result.

Education for clinicians 

We need to consider that there are different groups within our clinical workforce that will have very different needs. Not everyone needs to be a data scientist or an expert in AI or digital health, but there is baseline education that everyone does need in order to work in a digitally thriving, transforming environment.

At the same time, we know that there is a significant shortage in the digital, data and technology workforce. Many of our clinicians who have an interest in these areas would be pivotal to driving forward digital transformation in the NHS if they moved into some of these roles alongside their clinical work. So we have to think about how we can help that part of the workforce develop their skills and work towards more specialist roles in digital and technology. We need to think about what we can put in place to prepare that specialist workforce, and also how we can build up that future career pipeline.

The aim is to have clear educational programmes that meet the needs of those disparate groups. That includes educational materials that are available to all, to raise baseline knowledge, and educational materials and opportunities that help drive the careers of those who want more specialist roles in these areas.

We’ve seen some of that already in the NHS Digital Academy and the Digital Health Leadership Programme that sits within it, and the Topol Fellowship Programme, which I was involved in co-founding a few years ago. These programmes provide some steps in educating clinicians to become part of that specialist workforce.

There’s more of a gap around baseline skills education, I think. Closing it will involve working with our educational bodies and stakeholders in those areas to embed this learning into curricula. Perhaps the learning can be included in college or in higher education, to ensure that these skills are embedded as early as possible in people’s careers, so that when they are working in practice they are ready to work effectively with technology and they understand the core concepts around digital literacy and inclusion.

The adoption of AI in healthcare

There’s a fantastic report published by my colleagues at The Health Foundation called ‘Switched on’, which looks at public and NHS staff attitudes towards AI. The first thing this report found is that the more familiar a person is with the concepts of AI, the more likely they are to be positive about it. It also looks at different workforce groups, and generally speaking it found that most workforce groups’ attitudes tended to be negative when the research was done. But again, the more familiar workforce groups – for example, doctors and dentists – tended to be slightly more positive towards AI and data-driven technologies. So that clearly shows there is a key piece of work to be done around increasing understanding, to drive familiarity and ultimately drive confidence and positivity around adoption.

I think the biggest struggle is for people to see examples of AI in practice. Unless you see it being used, it’s still a term that feels a bit ‘pie in the sky’ for many people. So it’s really important that we communicate where AI is being adopted, how, and what impact it is having. That will help people to understand that it is very much a real concept and it is being used in practice.

In terms of where it can be used, I think it can definitely help to address some of the challenges that we have with our healthcare workforce and processes. As a GP, there are many things in my practice that could really benefit from the support of AI and data-driven technologies. A lot of things could be made more efficient and productive. AI could be a way to tackle those problems, from managing patient registrations to predicting disease to triaging consultations to the right person at the right time. It could help to free up time for our healthcare workforce to spend on direct patient care.

So if we can start to show some really clear use cases and highlight the benefits in terms of supporting the workforce, particularly in areas such as clinical effectiveness or service efficiency, then I think we should start to see more positive attitudes towards AI.

Regulation and governance 

I think our regulation and governance are absolutely fit for purpose. We have regulatory bodies here in the UK with clearly defined responsibilities; the infrastructure is there, and there’s a strong understanding amongst these regulators that regulation and governance of AI and data-driven technologies are going to be key to healthcare services in the future.

I’m pleased to say that we’ve had really positive and collaborative experiences with the likes of NICE and the MHRA. The work that’s been done to set up the multi-agency advisory service is excellent, because it becomes a first port of call for people to understand the current landscape: how you go about regulating an AI or data-driven technology, and what standards it needs to meet before it can be used in practice. This is useful both for those who are developing these technologies and for those who are implementing them.

I think there’s an understanding that this is important, that it needs to be present and clear, and supportive of the people who are trying to drive the uptake of AI in the NHS.

Of course, it’s at an early stage, so we need to be mindful of that. The technology is at an early stage too, and we’ve got some early use cases, so I’d say let’s move forward with those and make sure everything progresses at the right pace.

Developing confidence in AI

In our report on healthcare workers’ confidence in AI, we talk about developing confidence on three levels.

The first level, at a national scale, is having the assurance that an AI technology has the right evidence base, has been assessed from a regulatory and standards perspective, and is robust enough to be used effectively in practice.

The second level of confidence is at a local level. Here, you need to know that the technology has that evidence base, but crucially, can the evidence be generalised to your population? We need to be especially careful of that with AI, because it’s based on data, and that data must be representative of the population you serve, so local validation is extremely important. It’s also important to make sure that you have the systems, culture and processes in place to implement something effectively at a local level – being confident in those processes within your organisation is going to be key to driving clinician confidence.

The third level of confidence is individual – understanding the impact of the technology’s use on a person or patient. It’s about understanding the impact that the technology can have on your decision-making, the types of biases you need to be aware of, and ensuring that the information the technology provides is adequately representative. You need to be confident that you can use it to make a decision safely. It’s also about understanding where liability lies when you’re using a technology like AI to support your own decision-making, so that you have protection from a liability perspective.

Finally, there’s user experience. Is it easy to use, and does it work well within your existing pathways? If those factors are in place, then you are going to feel more confident with the technology.

Many thanks to Hatim for taking the time to share his thoughts.