Our latest HTN Now webinar, supported by Restore Information Management, focused on the practicalities of AI technologies, exploring topics including implementation, adoption, the role of data, policy, regulation, evaluation and best practices. With the help of our expert panellists, we also took a closer look at examples of AI in health and care.
Panellists included Neill Crump, digital strategy director at The Dudley Group NHS Foundation Trust; Lee Rickles, CIO, director and deputy SIRO at Humber Teaching NHS Foundation Trust; and Beatrix Fletcher, senior programme manager (AI) at Guy’s and St Thomas’ NHS Foundation Trust (GSTT).
Beatrix started off the introductions, sharing some details about her background in nursing and midwifery, her work at GSTT in “a small team which sits within medical physics”, and her role running fellowships in clinical AI focused on educating the workforce about the use of AI on the frontline, as well as working with the CSC team to evaluate and deploy AI products at the trust. “We mostly use AI right now for medical imaging,” she said, “but we also use decision support tools in triaging and prioritising, including in the emergency department, and we have different modalities in large language models (LLMs) for auditing of reports post-surgery to predict elements of an intervention which may have had a certain outcome.”
Neill talked about his 25 years’ worth of experience working in the public and third sectors, as well as his role as digital strategy director at The Dudley Group, with a remit including design of the trust’s digital plan and architecture, “all the way through to cyber, the analytics team, and cross-portfolio working”. Giving an example of where AI has already made an impact at the trust, he said: “Our focus has been on stratification tools, using our informatics team to work with suppliers on prioritisation, such as improving information from diagnostics on the stroke pathway, through to more focus on elective recovery and outpatient waiting lists.” Migrating the trust’s analytics platform to the cloud has enabled the use of natural language processing tools, he continued, “where we’re focused on things like patient and staff experience feedback”. And in terms of cyber, “we’ve been applying that to detection, response, and threat protection, which has given us a great understanding about using reinforcement learning to train the tools based on the alerts we receive”, he said.
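Neill didn’t go into the technical detail of those natural language processing tools, but to give a flavour of what analysing free-text feedback can involve, here is a minimal, illustrative sketch using an off-the-shelf sentiment model. The model choice and sample comments are assumptions for illustration only, not the trust’s actual pipeline.

```python
# Illustrative only: a minimal sentiment pass over free-text feedback,
# not The Dudley Group's actual NLP pipeline.
from transformers import pipeline

# Assumed example comments; real feedback would come from surveys or friends-and-family responses.
feedback = [
    "The nurses were attentive and explained everything clearly.",
    "I waited four hours in outpatients with no updates.",
]

# A general-purpose sentiment model from the Hugging Face hub (an assumed choice).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

for comment, result in zip(feedback, classifier(feedback)):
    # Each result is a dict with a 'label' (POSITIVE/NEGATIVE) and a confidence 'score'.
    print(f"{result['label']:8} ({result['score']:.2f})  {comment}")
```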
Auto-coding is another use case from Dudley, Neill went on, “and that’s making sure we can have a depth of coding for acute and planned care admissions, so we understand our activity and bring in the right income for the trust”. And finally, Neill shared some examples of opportunities the trust is exploring around the implementation of AI in other areas, including the use of ambient AI to release time to care by transcribing conversations between patients and clinicians and entering them into the system automatically. “We’re going to be deploying that across our same-day emergency care,” he said, “and perhaps other areas such as rheumatology”. That factors into the wider move to support outpatient digitisation, he considered, along with improvements to efficiencies for staff and experiences for patients; bed allocation; and other operational improvements.
Drawing on his perspective as CIO, director and deputy SIRO at Humber Teaching NHS Foundation Trust, and as director at Interweave, where there has been a focus on developing Shared Care Records and harnessing LLMs to aggregate data, Lee talked about some of the challenges around data and information sharing. He also told us about the ICS’s community of practice, “which is all about sharing ideas and helping each other”, and how Humber Teaching had won an award for using AI in waiting list reduction. “We’re looking at where we’re going to go with Copilot,” he added. AI is definitely something the trust’s leadership is “very keen on”, he said, “and the AI group reports directly to the executive team – we’re always looking for opportunities to improve quality and efficiency across the board.”
Overcoming barriers to AI implementation in practice
Moving on to discuss some of the existing barriers to AI implementation and how to overcome them, Neill talked about how The Dudley Group is establishing itself as a “learning lab”, which he described as a “learning journey” to ensure things such as a problem statement, key challenges, and approaches are in place. He also shared a graphic (see Figure 1), showcasing the trust’s work on this to date. “That looks at the people we’ll be collaborating with, the infrastructure needed, and then some of the outputs that we’re looking for in terms of relationships, engagement, key areas of focus where we’re planning to deploy AI, and then some success measures, to help us work out where we’re headed and record that.”
![](https://i0.wp.com/htn.co.uk/wp-content/uploads/2025/02/2.png?resize=810%2C461&ssl=1)
“What we’re saying is we’re going to use that to automate with AI and actually put that within the workflow,” he said, “because I think that’s going to be one of the critical things for clinicians, that they don’t have these separate AI solutions sitting outside of their existing workflow.” This, along with considerations for future uses of AI, is a key component of the trust’s new digital strategy for 2025-28, he told us, adding that mapping that against What Good Looks Like and the MITRE framework is helping to build an understanding around how AI is going to help support people, make practice safer, empower citizens, and improve care (see Figure 2). He also highlighted how that work had bolstered understandings of potential barriers, like clinical safety standards.
Neill went on to talk about the barriers around providing a modern, secure data infrastructure and operating environment, noting the organisation’s recent move to the cloud. “We think a lot of AI will be software as a service in the future,” he considered, “so I think you need to make that secure and available, as well as to make sure you have those metrics in place, to help you demonstrate how you’ve overcome those barriers.”
Beatrix shared some learnings from a project management perspective, echoing some of Lee’s earlier points about the importance of having available and accessible data. “If you don’t have the data, you can’t use this technology,” she said, “and the same goes for data which is siloed or unstructured – from a very ground-up level you need to be thinking about where you’re wanting to go, how your systems are getting in your way, before moving to what you can potentially build on top of it.” Further issues can arise where people go outside of their organisation in search of structured datasets, she considered, “which may not be representative of your population, so straight away your performance is going to be poorer”.
From an organisational perspective, whilst Beatrix noted that there is “a lot of conversation about wanting this”, she observed that it’s somewhat limited. “We need to look on an organisational level at what our appetite actually is for risk, how soon we want this, what work we’re willing to put in toward an organisational and cultural shift; this is not something that a separate team can just decide about – everyone has to come on the journey.” Part of that is changing the ways we think about the tools we use in practice, she went on, “and with a lean approach, you don’t want to be automating or using AI for a singular part and then ignoring the downstream impact”. This is especially true for tools that offer prioritisation, she said, “because we often find that in, say, radiology, that may have some impact on speed, but the impact is seen where they’re now referring those patients to; we should ensure those departments are ready for their referrals to be increased tenfold”.
Dealing with risk and regulation around AI implementation
Lee talked about the importance of recognising and understanding bias and how that might apply to your population, before moving on to discuss the necessity of pinpointing a challenge or specific area of quality that a tool might be used to improve. “The absence of understanding what classes as a medical device when it comes to suppliers makes it difficult for organisations to grasp what should be applied in terms of regulation,” Lee said, “and there can be further issues when suppliers talk to operational teams, promising all of these mystical savings, and then they get disappointed when you start asking what the evidence base is, how it meets certain regulations, and so on.” From a supplier point of view, “it does feel like a bit of a Wild West”, he continued, “but the biggest risk is probably around the clinician that’s playing with ChatGPT in shadow IT, using it for practice without being tested or checked”. A survey completed at Humber Teaching found “probably around 60 percent” of staff were already using that in shadow IT, he shared, “so it’s already in there – the issue is using it for clinical”.
“A lot of the reasons you’re flagging up are the exact reasons we introduced a fellowship in clinical AI,” Beatrix agreed, “because we want to empower clinicians with knowledge, so if they’re approached by a supplier, they have those questions straight away.” That kind of knowledge can also help manage things like risk and bias, she went on, as “the clinician is best placed to know the safety impact of a tool being used – the power should be in the hands of those who will be using it on a day-to-day basis”.
Neill noted that from a procurement standpoint, “what we’re looking for is that explainability around those algorithms, so we can see how the decision making takes place, because that gives clinicians the understanding, in a clinical context, of how that AI will be deployed”. Suppliers going and getting peer-reviewed is also important, he reasoned, “so there’s a clear evidence base around their solutions, because we’re finding that as you implement with clinicians, quite rightly, they’re going to question things and want that assurance”. Part of that is often wanting to speak to clinicians from other trusts to find out how a tool or solution is working in practice, he said, to learn about any issues and outcomes it has offered to date. “Once the implementation is completed, you need a really robust process, working alongside the supplier, to make sure there aren’t any problems arising like false positives – that can happen in circumstances like where a US vendor has trained their algorithms on a different demographic which doesn’t fit with your own,” he concluded.
Assigning responsibilities and making sure everyone understands those can also be complex, Neill shared. “It’s ensuring chief executive officers know that they have a statutory responsibility as part of the Health and Social Care Act; that the medical director appoints a responsible person for AI; you’re going to have incidents that do arise, and you need someone to be responsible for that; at clinical director level, they need to know they’re responsible for the safe planning, deployment, and post-evaluation.” Training is also key, he said, “so people are aware that there may be limitations, and they’re not solely relying on that AI”. Getting documentation in place and preparing the evidence base helps to secure transparency and compliance, he told us, “whether that’s the training of the datasets, validation processes, risk assessments, compliance with regulations, or similar”. Help from the national team is also required, he went on, “because if we look at one of the most successful areas, which is stroke, in a period of six years the number of trusts using AI in that area has increased from about five percent to 100 percent, and a lot of that’s down to the national team and their work with suppliers, making sure documentation is done correctly, that it has the right sign-off. What we found, then, was that our teams were able to more easily adopt that solution, whether that be at the start of the process with the IT vetting and the information governance, or all the way through to the actual implementation and clinical engagement”.
“One of the struggles that we have is that we cannot accurately give you an answer relating to performance and risks,” Beatrix responded, “until we have trialled it on our patient datasets. We can do that retrospectively, and that’s what we do with all of our evaluations – assess their feasibility, their criticality, their potential impact on patient care both upstream and downstream, and whether that’s performing as the vendor says it does – but there’s this critical point of a prospective trial deploying the product into the workflow and running it either side-by-side or with a test team, and returning to evaluate that in three months’ time.” Often, it’s trying to convince the organisation that something is safe enough to deploy in that way so it can be tested, she said, “because understandably, in traditional deployments of products, we already know all of that beforehand, so that comes down to the cultural shift, and understanding we have to experiment – it’s hard, it’s expensive, and ultimately that tool staff get used to might disappear again whilst we build the business case following the trial, so it’s all of this clunkiness that we’re only going to get better at the more that we do it”.
Involvement from the national team is essential, Lee agreed, giving the example of Copilot. “Everyone is testing that at the moment, so we’ve probably got 40-50 different tests of the same product going on,” he said, “rather than actually NHS England stepping in and coordinating that, so we can all benefit from the learning and evidence base.” Unless organisations start coming together more, he shared, “we’ll be duplicating a lot of the same work, we won’t get those benefits, and it’s not the best use of NHS funds. We can’t afford for NHS England to always be five or six years too late, when everyone’s already done something.”
Audience questions: ethics, sustainability, managing paper records, and more
An audience question asked our panellists whether their organisations were factoring in any ethics or sustainability criteria when procuring AI solutions, “given the amount of energy that AI can consume”. Beatrix responded that ethics is part of the procurement decision and pops up at different stages along the pathway, but that the capability only extends so far, since “suppliers have a right to refuse to tell you beyond a certain point, because then they’re telling you how their product works”. And on sustainability, Beatrix noted that AI can “often be held to a higher level” due to its existence as a new technology, “so when you talk about how much energy a particular algorithm will use in compute power, we can give you that, but there’s not a very good understanding of how that compares to anything else used within the hospital”. Doing that work needs to be a group effort, she went on, “so we can be discussing where in the organisation things are falling and how to compare that to overall benefit”.
Ethics is also embedded within the overall framework and associated policies at The Dudley Group, according to Neill. “That’s at an ICS level, so we’re working with other providers and teams across Black Country ICB to allow us to do all of that faster, and to ensure we’re not all doing that separately. We’re leaning on national guidance, so we’re not doing that from scratch, and we’ve taken bits from that where that makes sense.” The trust hasn’t yet fully explored sustainability around AI, however, with Neill stating: “That’s an area we’re still to explore. Although we have a clear Green Plan, we’ve not embedded AI in that yet.” He also talked about DeepSeek, a new AI-powered chatbot similar to ChatGPT, voicing his interest in the open source nature of that solution, and the possibilities that offers to “run that infrastructure yourself”. The amount of compute power it uses in comparison to other AI models is also lower, he observed, “so if that gives us an insight into where the industry is heading, I think that will help us all in terms of thinking from a sustainability perspective”.
“Sustainability is interesting, as what we’re really talking about is cloud computing,” Lee considered, “which years ago was seen as the green option”. The issue with cloud, however, is that “you can use far more capacity and capability, far faster, than you can with on-premise”, he added, “so we’ve burnt through more using the cloud simply because you can do things faster, and people do want those things faster, which isn’t necessarily a green option”. AI and the green agenda are going to be “difficult to square off”, he predicted, “because those two things are just not compatible – as soon as you put AI in, and future versions of that, that means we’re using more and more”.
Our panellists also considered an incoming question around using AI within organisations still relying heavily on paper records, or handling large amounts of unstructured data. “If you’ve got poor data, you can’t expect AI to solve all of your problems,” Lee responded, “and there are many challenges around trying to convert paper records, trying to code those in some way, which make this problematic.” He also noted other potential areas for AI to be used, giving the example of Humber Teaching’s use of AI as a medical device for paediatric patients. “That monitors them and their behaviours, helping to inform their eventual diagnosis and then follow-on plans,” he shared, “so not everything with AI is always around the data in your record.”
“I would echo that,” said Beatrix, “because it’s a difficult one to gather – not only because you don’t have that moment in time recorded, but also because any type of AI you take forward can’t extrapolate things that were happening at that moment in time. We also find a lot of form filling is done in a way to just get to the next stage, so I wouldn’t say electronic records are always that much better – you find forms where boxes have been filled in with dots just to get past that stage.” In general, she continued, we shouldn’t be moving away from a patient-owned record, “and in that case we just need a digital version of that, rather than digitising someone’s paper record, because that’s still not the collaboration we want to achieve, and that’s not empowering patients with their own data and their own care”.
“It just reinforces how important data is to AI,” said Neill, “and also that we still have historical problems in terms of physical records, which may not be solved, but then again things like AI optical character recognition could maybe be helpful there.” Getting that record digitised would be valuable for research purposes, as well as to help offer that patient ownership of data mentioned by Beatrix, he continued, “because that will likely increase the quality of data over time as well”.
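To illustrate the kind of optical character recognition Neill refers to, the short sketch below extracts machine-readable text from a scanned page using the open source Tesseract engine via pytesseract. The file name is hypothetical, and a real records-digitisation workflow would add quality checks, indexing and information governance steps.

```python
# Minimal OCR sketch: pull machine-readable text from a scanned record page.
# Illustrative only; not a production digitisation workflow.
from PIL import Image
import pytesseract

# Hypothetical scanned page; real records would be batch-processed from a scanner or split PDFs.
scanned_page = Image.open("scanned_record_page_001.png")

# Tesseract returns the recognised text as a plain string.
text = pytesseract.image_to_string(scanned_page)

# Downstream steps (not shown) might index this text for search,
# attach it to the patient's digital record, or feed it into coding tools.
print(text[:500])
```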
Neill also offered some final thoughts around clinical safety for AI products, sharing with us that The Dudley Group has “a very clear clinical safety process”, as well as a clinical safety officer. “We do that as a process which starts from procurement and leads all the way through to implementation,” he said, “and it’s about understanding the potential hazards and harms, coming up with those mitigating actions we can take, and looking to post-evaluation to identify and tackle problems that arise over time.” He suggested creating a register of AI assets and taking a methodical approach to reviewing those regularly over time. “That’s why, in this day and age, you can’t operate without a clinical safety officer and all of the training that they go through – working at ICB level has been helpful because you then have the different skillsets within the clinical safety domain coming through. Rather than having a single point of failure, your clinical safety officer should also be training other users around the organisation to enable you to go at pace through a variety of different technologies.”
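Neill’s suggested register of AI assets, reviewed methodically over time, could take a very simple structured form. The sketch below, using only the Python standard library, shows one possible shape for such a register; the fields and the example entry are assumptions, not The Dudley Group’s actual format.

```python
# Illustrative sketch of an AI asset register with scheduled reviews.
# Fields and the example entry are assumptions, not any trust's actual format.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AIAsset:
    name: str
    clinical_area: str
    supplier: str
    clinical_safety_officer: str
    deployed_on: date
    review_interval_days: int = 90
    last_reviewed: Optional[date] = None

    def next_review_due(self) -> date:
        # Reviews are scheduled from the last review, or from deployment if never reviewed.
        baseline = self.last_reviewed or self.deployed_on
        return baseline + timedelta(days=self.review_interval_days)

    def review_overdue(self, today: Optional[date] = None) -> bool:
        return (today or date.today()) > self.next_review_due()

# Example register with a single hypothetical entry.
register = [
    AIAsset(
        name="Stroke imaging prioritisation",
        clinical_area="Radiology / stroke pathway",
        supplier="Example vendor",
        clinical_safety_officer="J. Smith",
        deployed_on=date(2024, 6, 1),
        last_reviewed=date(2025, 1, 15),
    ),
]

for asset in register:
    status = "OVERDUE" if asset.review_overdue() else "on track"
    print(f"{asset.name}: next review {asset.next_review_due()} ({status})")
```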
Beatrix told us how her team have started the process of developing their own service queue within the IT service portal, to allow better visibility of issues and incidents relating to AI products. “We have all the information about the product, how it works, and the safety documentation on the page,” she said, “so that’s searchable to help support the IT team if those problems arise.”
To add a supplier perspective, Stefan Chetty, digital services director at Restore Information Management, commented: “When it comes to realising its benefits in regard to health records, we must take some critical first steps. Many trusts still use physical records and, without pointing out the obvious, AI is not possible without a change to this. Once it has been decided what to digitise, the NHS will need to shift clinician behaviours towards digital-first workflows and ensure the integrity of the data itself.” He added: “Even in trusts that have already adopted digital patient records, the accuracy and quality of data remain crucial. Without structured, reliable information, AI cannot deliver on its promise to improve patient outcomes in this area. Equally important is ensuring that data is stored and processed securely, using the right platforms that comply with healthcare regulations and safeguard patient confidentiality.”
We’d like to thank our panellists for taking the time to share their insights with us on this topic.