HTN was joined by Neill Crump, digital strategy director at The Dudley Group NHS Foundation Trust, and Lee Rickles, CIO at Humber Teaching NHS Foundation Trust, to discuss practical steps health and care organisations can take to prepare for AI. Neill and Lee shared details of their current work and their journey to date, best practices, learnings, challenges, and the opportunities that lie ahead.
Lee offered an introduction to his role as CIO, as well as director for Interweave, a platform that enables health and social care staff to access real-time health and care information across different providers and between IT systems. With Interweave, the focus has been on providing shared care information and other technologies to seven ICSs; with Humber Teaching, a recent EPR move to SystmOne has taken centre stage and allowed the trust to refocus on where it would like to get to with its technology.
For AI, Interweave adopts a user-centred design approach, looking to solve problems rather than simply building tech, Lee stated. “It’s NHS-owned, and uses open standards, open source technologies, keeping costs down in a lot of ways for partners.” One of Interweave’s challenges has been presenting information to different practitioners in ways that work for them without creating a complex access control system, he said. “We’re now in the third stage of our proof-of-concept work with Google and Deloitte, using the Google AI infrastructure and taking a federated approach to ensure standards are in place, and work has been progressing on looking at how we could get information pulled out from records using large language models, and presented using cognitive AI.”
As for Humber Teaching, the board has started looking at getting the governance in place and introducing tools such as AI triage for neurodiversity, to try to promote early intervention and prevent impact on the system further down the line. While the trust does use Copilot, it hasn’t completed a benefits case, and is focusing on implementing an AI scribe solution.
Neill highlighted that The Dudley Group is now forming a group with Sandwell and West Birmingham, noting that he will be moving to a new role focusing on innovation and partnerships, working with partners including research organisations and universities to explore opportunities for AI. He shared work he has been doing over the last few months on adapting the MITRE framework to fit with each of the What Good Looks Like framework’s domains.
“We’ve also been deploying AI in a number of spaces, including Ambient AI, and in diagnostics where we’ve mainly been looking at stroke and thrombectomy. It’s also being used in clinical decision support in terms of referrals and giving clinicians a better way of making those referrals, and in cyber and risk stratification,” Neill told us. Another project is setting up a research lab for machine learning, which is currently in progress.
Key insights and focuses
At the HTN Live event back in November, Neill hosted a series of roundtable discussions on AI, and gave us some insights on key themes. Some of the main areas where NHS organisations were planning to introduce AI included analytics and machine learning, clinical decision support, Ambient AI, patient engagement, diagnostics, cyber resilience, and more, he shared. “It looked like a lot of the focus was being driven by local champions, rather than a whole organisational approach. I think there are pockets of good practice happening, and the next job for organisations is to understand what is happening out there.”
It’s important to go out on the shop floor and speak to different teams to find out what they are using and how, Neill said. “If you haven’t yet started rolling out Ambient AI, I can assure you that people will already be using it themselves, and using their own version of it rather than one that has been approved as meeting standards,” he continued.
Lee noted that “it’s not just Ambient AI where the wider workforce are using AI all over the place; it’s really hard to block if you don’t want it to be used”. The best approach is to get the people from those pockets where AI is being used, and bring them together in order to get the best out of all of them, he considered, “and we’ve got an AI oversight group that is a group of keen people who have an understanding and who represent the organisation as a whole”.
For Humber Teaching, the medical device aspect of AI in clinical settings has meant the trust has focused on corporate opportunities, Lee stated, although an AI triage tool has been introduced, focused mainly on helping to triage young people referred to services. “The AI engine is based on a deep learning and large language model, and makes recommendations on whether individuals require our services or would be better suited to services elsewhere,” he explained. The tool can also perform a level of care planning and management of the tooling itself.
That is representative of the direction of travel, particularly from a mental health point of view, and the trust is looking to do a similar thing for older adults and front door services, Lee said. “It’s just the time and the money – that’s one of the challenges, getting the finances to stack up to the quality, to stack up to the product.”
Neill discussed a collaborative project The Dudley Group is working on with the Midlands regional team to run a wider-scale procurement on Ambient AI for all trusts. “We started where the pain was, so in our organisation same day emergency care is really busy, and outpatients – we had a lot of different problems like backlog issues with documentation, which were creating clinical risk, and staff fatigue due to long hours,” he noted. “We had to get clear baseline metrics, and we created a case study based on that. We also had to do a clinically-led workflow deployment, embedding it directly into real consultations, and ensuring clinicians retained full accountability for review and sign-off.”
A phased rollout with local ownership and clinical super users across each of the services followed, with iterative refinement based on feedback, starting small and expanding as confidence grew. Getting the governance built in from day one was integral, Neill continued, “and we made sure the solution was restricted to approved corporate managed devices, and that the trust standards were adhered to in terms of device use”. Treating AI output as a draft, rather than a final record, is going to be increasingly key moving forward to maintain clinical input and professional judgement. Findings have shown a 71 percent reduction in documentation time in same day emergency care, from over eight minutes to two and a half. In rheumatology the letter backlog has been reduced from 3,000 to less than 200, with “major impact on risk score and a positive impact on patient journeys”, he added.
Reflecting on what might be done differently if Dudley were to start the project again, Neill highlighted the significance of having a trust-wide narrative and making sure people understand the solutions and what they are used for. “I think making sure the ‘why’ question is answered earlier, and having strong baseline communications around it is important,” he stated. “Patients are generally pretty accepting of the fact that it’s going to lead to better outcomes.”
“We haven’t taken the view that we necessarily need to ask for consent,” Lee told us, “because if we look at using AI as a medical device, we don’t get consent for heart monitors or CT scanners. There needs to be a level of pragmatism involved, making people aware and keeping them informed on how we are using these technologies – my personal view, not the trust view, is that introducing the need to ask for consent will create an added layer of complication, meaning clinicians need to figure out whether it can be used for that particular patient, taking away time to care.” The key thing is acting on any issues that arise, and making sure patient preferences on sharing data are taken into account.
Data quality, environmental impact and sustainability
There are lots of questions being asked around the environmental impact of AI, but Neill suggested not seeing it as any different to the rest of the digital portfolio. “We’ve always had that conversation, and we’ve got a green sustainability function that works alongside all our different programmes. It’s making sure we know that AI is a carbon cost, measuring that, and looking at different ways of deploying to do it in the most efficient way.” One of these types of decisions has been whether to run large language models in the cloud rather than on-premises, according to Neill. “We run them in the cloud, where we know there might be a big spike but we need that compute, and in other cases we’ll run it on-prem because we can do it really efficiently. It’s about linking the sustainability to the productivity – if you can be more productive then that links to the fact you’ve done the right thing.”
Lee referred to some aspects of AI not yet having been completely tested in a healthcare environment. “It’s still early days, and I think there will be bigger problems down the line when quantum computing comes in, because that will burn energy and AI resource to a degree we’re not seeing at the moment. It’s still a concern, because we are burning energy at a far higher rate than we have previously, and there will no doubt be consequences.”
Lee and Neill next discussed the topic of data quality, with Lee emphasising how central the quality of data is to how effective large language models can be. “If you’re very reliant on paper, or if your data isn’t in real time, your ability to sweat the use of AI gets more difficult, more inconsistent, and more risky,” Lee shared. “You need really good quality data layers for AI and robotic process automation – if you’re not already recording or measuring it, don’t expect AI to solve your problem, because if your data is rubbish, AI is probably going to create more nightmares.”
“That’s exactly one of the problems we’ve had in clinical coding,” Neill agreed. “Our team uses an encoder product, and they’re encountering those kinds of issues in that the data quality isn’t where it needs to be. Clinicians aren’t there to code data correctly; they’re there to make sure the patient is safe, but that can have a knock-on effect on the quality of the data we’re producing.” The team is now looking to embed the clinical coding within the Ambient AI. “One of the key elements to clinical coding is recognising comorbidities. The clinician will sum that up verbally during the consultation, and the Ambient AI will capture that at source, meaning the clinician doesn’t need to type all of that in again – the data quality and coding is done upfront. For my future role that’s exciting, as the quality of data for research is going to be vastly enhanced.”
We’d like to thank Neill and Lee for joining us to share their insights on this topic.