For our latest HTN Now session, we were joined by experts from across the health and care sector, including Dr Shanker Vijayadeva, GP, NHS England (London region); Rhod Joyce, deputy director of digital transformation, NHS England; Dr Dom Pimenta, co-founder & chief executive officer, Tortus AI; Dr Hannah Allen, chief medical officer, Heidi Health; and Dr Andrew Whiteley, managing director, Lexacom.
Our panellists discussed the practicalities and key considerations around using ambient scribe technology in primary care and general practice, delving into the risks, evidence, compliance, and how to move forward.
Unexpected challenges with deployment
The panel began by tackling the “unobvious challenges” in this space. Shanker was the first of our panellists to respond: “To start with, this is a product that GPs have often already heard about and used. They’ve played with it and started to recognise how it can change their practice, which is a good marker for a real product that is actually changing practice. However, I think we’ve got to understand that the average practice does not understand all the steps that get thrown their way, particularly when it comes to NHSE guidance.”
Expanding on this, Shanker highlighted some of the confusing elements that are involved with implementation, particularly when it comes to guidelines, noting how GPs often have to stop and wonder what their ICB thinks, what a DCB0160 is, or what exactly they’re supposed to do with the tech. “And that can then slow them down,” Shanker said. “That is probably the biggest challenge. But there are quite a few different products on the marketplace too, so knowing how to distinguish between them is a challenge in itself.”
Rhod echoed much of what Shanker had to say, adding that ambient scribe technology is a “novel technology, but still just a technology”, which he noted has “huge potential benefits across primary and other care settings”. He also highlighted challenges around safety and assurance, saying: “It’s one of the most important things. And understanding what the technology does can be tricky. So, we need to think about what that assurance model needs to look like and doing it in a way that is rapid, while making sure we are deploying safe, well-evidenced technologies.”
For Hannah, understanding safety and governance has also been a key challenge that she’s seen as part of deployment, but she noted challenges around change management as well: “Even just simple things like knowing what to do with my hands now that I’m no longer using them for typing. Really simple examples of things that you don’t necessarily anticipate when you’re deploying this type of technology. Yes, the educational challenges are there, but also on the ground, there are issues around change management as well.”
Building on this point, Dom shared: “I remember using it for the first time and I suddenly realised that I struggled with looking the patient in the eye because usually I’d be typing. When we set up the first clinic without a keyboard, we didn’t know where to put the chairs, so it’s not always the tech, it’s also the physicality of being in a room with someone.”
Andrew spoke about the ever-evolving nature of the NHS, stating: “I think the NHS has evolved slowly over the past 60 years and the government changes its mind every 10 years about how it should be run. And primary care particularly adapts well to those changes.” On the implementation of AI in particular, he noted: “It has changed the way that doctors can actually function; it will save them so much time and provide so much opportunity for accuracy and better quality notes. So I think it’s something that we need to make sure that we’re explaining clearly to our customers.”
He went on to highlight how important regulation and clear communication are for this type of tech, in order to tackle challenges around trust: “Making sure that it’s done in a safe way, that the data is looked after and protected and held in a safe location is important. And there are also challenges around making sure that people actually understand the technology and that we’re completely open with them about what it can do and how it’s going to change rapidly.”
Considerations around safe adoption
When looking at safe adoption and making sure GPs aren’t taking too much of the burden, Rhod emphasised the importance of looking at how support is given to regional and existing ICB colleagues when participating in the deployment. “It has to be done through a partnership and co-designed with the team,” he said. “We know that to deploy these things safely, you need to understand what a DPIA is or what a DCB is and how to fill those documents in. I think if we want to set some guardrails around what is required to deploy use cases safely, it has to be done as a cooperative between the centre, the system, and the suppliers.”
Reflecting on the urgency surrounding adoption, he went on to say: “We know people are desperate to adopt now. By working closely with partners and suppliers to determine what validation looks like, it will help narrow the field for a lot of the people who are looking at commissioning, whether they’re commissioning at a GP level or whether they’re being supported at an ICB or even regional level.”
Finally, Rhod suggested that it was important to evidence the safe and practical usage of AI across various sectors, through dedicated use cases: “How we look to embed AI capability is absolutely critical, because then we can evidence things and evaluate things based on those use cases. We shouldn’t be mixing a primary care use case with a secondary care or an A&E use case, because we want to be able to build that evidence nationally to say this is good, this is safe, this is impactful and to be clear on where exactly it has been impactful, because it may be different for primary care vs A&E.”
Shanker highlighted how the growth in AI ambient technology has actually helped to increase awareness around important compliance and safety standards such as the DCB0160, stating: “I think it has drawn attention to it as something that we really should have been doing from the beginning”. Emphasising this, he mentioned: “At one point, no one knew what a clinical safety officer was or who it should be. And now when you do look at who should be responsible for the DCB0160, you’re faced with capacity issues.”
Shanker went on to explain how this can cause problems: “There are limitations to what an external person can do in terms of final sign-off. They can support you, lead you, draft documents, etc., but the final review will most likely need to be at PCN level, which is why we need that clinical safety officer role. We shouldn’t just think about it in terms of ambient scribes, it needs to be a part of everything you’re doing as a PCN.”
Dom added that one of the other key things people may not appreciate about ensuring compliance and safety is the fact that “if you’ve done it once, that’s not the end. You have to do it every three to six months because the AI technology itself will move, it will update and there’ll be new features.” He noted how these updates can “fundamentally change your original safety case”, requiring a new one to be created, “and we don’t really have that type of cadence built into the system”. He considered potential ways around this, either on a national level or through local forums, meetings and workshops, which he said can be “very hard to maintain”.
Andrew agreed with Dom before sharing his thoughts on regulation: “I think digital dictation and other more straightforward technologies have had very little regulation about what could be installed. As long as you showed that the data was encrypted and stored safely within the UK, I think people were happy.” However, he noted that newer technology such as AI scribes have so much more capability, which is why “we need to make sure that the providers are controlling it and making sure it’s doing exactly what is required at the time. No more, no less.”
On how general practice can safely bring this tech in, he suggested “some type of reference book” which outlines exactly what practices need to do and what they need to check, as well as a list of companies that are already known to comply with all the safety regulations. “There are so many things which are so easy to install, so accessible, that I think we need some way of actually saying which ones are OK and which ones are not OK.”
Andrew also emphasised the importance of making sure patients understand elements around safety: “Everyone is concentrating on making sure that the doctors understand what they’re taking on, but I think the patients need to be brought along with them as well. They’re the ones who have access to the NHS App where they can see the consultation, and they’re the ones who are actually using the technology to view the AI notes.”
For Hannah, “safety, compliance and governance should be part of the DNA of any practice, not just a tick box exercise”. She said that because AI technology evolves so quickly, “it needs to become a core element of deployment, so that we can really understand how it’s evolving and what that means on the front line. It’s really important that we spell that out for people because otherwise it just feels completely overwhelming.” Hannah also emphasised the importance of educating people in the right way “so that they know what they need to do, how they need to do it, and how to access support”.
Increasing accuracy and reducing hallucinations
One of the challenges of using AI in primary care comes from hallucinations, which Dom explained as being “clinical entities that are in your notes that were not said in the transcript or any other audio or data inputs”.
Sharing his thoughts on how to reduce these hallucinations and increase the accuracy of the AI scribe, Andrew began by explaining: “Using these technologies is great, but a YouGov survey we sponsored shows that 74 percent of general practice patients do not trust big companies.” He added that even though AI should not create anything that hasn’t been said between the patient and the doctor, “it has been known to happen occasionally”.
As an answer to this, Andrew shared details on Lexacom’s Patient Shield tool, which he explained “redacts all the data before it goes anywhere near AI, so that all the names, telephone numbers, places of work, addresses, etc., are removed before AI gets involved.” He went on to note how “critically important” this is in primary care and in medicine in general: “We don’t want someone making up something that hasn’t been said. At the moment, while we’re just class one, we’ve created a product that will compare what was said originally to what the AI has returned, and perform a post-AI assessment of the information to make sure there’s nothing that is generated that hasn’t been already mentioned. That means we should see hallucinations drop to near zero and I think that’s going to be a big step forward.”
Hannah emphasised the point she made earlier around integrating safety and governance as part of company culture, to encourage a more proactive approach to improving the accuracy and performance of AI tools. She noted the importance of asking key questions such as: “How do we improve things? What’s our internal governance feedback loop like? What does post-market surveillance look like? How do we work with partners on that?” She also referenced the need to monitor and measure hallucinations, omissions, word error rates “and all of those things that we all know from working in the industry are super important to be proactively assessing”. Hannah noted education as key, too, adding that there should be focus on “educating people about those risks and being really transparent and open about them”.
Post-market surveillance
On the topic of post-market surveillance, Hannah insisted that it’s “absolutely mandatory” before adding that “co-designing this with partners to really understand what we need to be feeding back into the system and how we should be learning from this on both sides, is super important”. She explained that from her point of view, “it’s not something that we take lightly, and analysing all of the feedback for any significant events while working collectively together can help us stay at the forefront of all of this”.
Rhod agreed that ongoing analysis is an essential part of post-market surveillance before stating, “Sometimes we think that because there’s a human in the loop when using this technology, the risk is automatically low”. However, Rhod explained how this can lead to complacency, which is why “we need to make sure there are audits in place to actually help support the clinicians and not blame them for not checking enough or not validating enough. We need to make sure that there’s real confidence in the GP community to be able to say, I understand the technology and I understand what it’s doing.”
The scale of the challenge
“I think it is an evolving science,” Dom said, as he reflected on what the panellists had discussed. He then went on to highlight some key figures representing the scale at which they were working. “There are five million interactions in the NHS every day across every single setting,” he said, explaining that an interaction is a patient/clinician conversation. “But there’s a one percent error rate in that setting, which is 50,000 errors a day. And even if you go all the way down to a 0.11 percent error, we’re still talking about 5,500 errors every single day.”
He also outlined the key problem with digital integration: “It’s all about provenance. We don’t currently have systems to track where text came from because prior to this it came from humans and once it’s in the system, it’s done, that’s in your record forever and we have no way of untying it. So, I think we can’t underestimate what happens now at scale and that’s why I think we’ll continuously push this space for quite some time.”
Preparing patients for change
Our panel then took some questions from the audience, with the first one focusing on national efforts to help prepare patients for changes around ambient scribe technology. Rhod answered: “We do have some governance layers where the patient voice will be represented. But I think what we really need to do first is engage more with the system to really understand what it takes to deploy, what patient consent will look like and what the data will look like.”
Recent patient feedback
Shanker spoke about the latest patient feedback he’s received. “I think when the patients are in that room with you, they’re concentrating. They want to get the most out of the consultation. Anything that’s giving them more time and more focus to do that, there’s a cognitive bias to do it. I don’t generally get detailed questions about where the data is recorded. It’s more like why are you bothering to ask me? Let’s get on with it.” He considered what this process might look like in the future: “Should you put some notice outside? Does it become business as usual in the future where we don’t have to ask at all?”
Shanker also emphasised the importance of the consultation coming first: “If a patient has got queries about the technology, the best thing you can probably do is give them the details and park it for the next consultation, so you can get on with the current consultation and they don’t feel like they have to make a quick decision.” Overall, he noted that “the acceptance rate is really high. Patients will probably become very familiar with it in a couple of years’ time, understanding that the notes are all saved and they’ve got default access through the NHS App.”
Adding to this, Hannah very much echoed what Shanker had to say, before noting a responsibility to work with practices and regions to figure out what assets need to be produced to make sure patients fully understand the technology being used. “Is it a summary after the consultation explaining what the tool is and how it works? Maybe it’s showing them the tool,” she said, before explaining that the most important aspect is making sure the patient feels like they’re part of the conversation in some way.
Moving forward with AI ambient scribe technology in primary care
As a final point, Dom looked at the short-term plans for helping primary care move forward with this type of technology: “Obviously we’ve got huge bottlenecks to solve. But there’s also mass education around the tools and the capabilities involved.”
He ended by saying: “In the future, what we can offer to GPs in terms of capabilities and administrative automation is going to be massive, way bigger than just scribing. And I think if that can help to restore the humanity and love for this type of work, that’s where we’ll start to see the real benefits. There’s a lot to consider around safe adoption and measuring that benefit over the long-term as well. However, I do genuinely believe that the happier the workforce in healthcare, the safer the patients, so if we aim for mass happiness, we’ll unlock benefits for everybody.”
In May, TORTUS AI, whose Ambient Voice Technology (AVT) was described as a ‘game changer’ by Health Secretary Wes Streeting, confirmed a strategic partnership with X-on Health, the largest primary care telephony provider in the UK, serving over 3,500 GP surgeries.
Surgery Intellect, powered by TORTUS, is a voice-enabled AI assistant that uses ambient voice technology to listen, transcribe and code consultations in real time.