The CQC has issued guidance on the use of AI in GP services, sharing what it looks at when assessing safety and compliance across areas including procurement, governance, human oversight, learning from errors, data protection, and staff training.
Assessors will check that AI tools have been procured in line with relevant evidence and regulatory standards such as DCB0160 and DTAC, and will review clinical governance arrangements to confirm appropriate and safe use. “If an AI tool is supplied from an NHS procurement list it is reasonable for the practice to assume that appropriate developer standards have been met,” the CQC states, “but there is still the need to ensure that it is deployed in line with its intended purpose and evidence of regulatory standards reviewed.”
There should be a responsible clinical safety officer (CSO) and a digital lead for AI technologies and “related clinical governance”, the guidance continues, who have completed relevant training. Evidence of human oversight, and of ongoing monitoring and evaluation of AI outputs and processes, will also be looked at, drawing on organisations’ audits, significant incident logs, or other quality improvement activity. “You need to demonstrate that AI is being used as a support tool – not a replacement for human oversight,” it adds.
Practices should have a hazard log and risk assessments for AI tools, as well as systems in place to report and investigate when things go wrong. The CQC will also look for evidence of learning from errors, checking for “established systems to report and investigate occurrences”, such as reporting to software developers or the MHRA’s Yellow Card system. “Any lessons learned should be shared internally,” it continues, “for example in staff meetings and with the CSO, and shared externally to digital leads in the integrated care system or through the Learn From Patient Safety Events (LFPSE) service if patient safety is at risk.”
When it comes to consent, the CQC maintains that practices should make patients aware that AI technologies are being used and allow them to object. For tools such as AI scribes, explicit consent is not required, as “it is appropriate to rely on implied consent under the common law duty of confidentiality”; the CQC points to ICO guidance on ensuring lawfulness in AI. Assurance should also be sought on the risk of bias, it states, whilst practices should take steps to reduce digital exclusion and ensure staff receive training to use AI tools competently.
Finally, the guidance shares that practices should be able to demonstrate how their third-party vendors have ensured compliance with data protection standards, through the Data Security and Protection Toolkit (DSPT), appropriate cybersecurity arrangements, Data Protection Impact Assessments (DPIAs), and adherence to GDPR.
AI in patient care
Somerset NHS Foundation Trust has shared a series of communications to explain to patients how the trust is using technologies such as AI, ambient voice, virtual nursing, and generative AI. Andy Mayne, chief scientist for data, operational research and AI, shared on LinkedIn that the purpose is to educate patients on the use of these technologies to enhance care, as well as to “be transparent about where we use it”. Andy also shared the trust’s AI policy late last year, which spoke of a commitment to ensuring transparency, along with safety, equality, inclusion, and thorough testing.
Earlier this year, an HTN Now webinar focused on the practicalities of AI technologies, exploring topics including implementation, adoption, the role of data, policy, regulation, evaluation, and best practices. With the help of our expert panellists, we also took a closer look at examples of AI in health and care. Panellists included Neill Crump, digital strategy director at The Dudley Group NHS Foundation Trust; Lee Rickles, CIO, director and deputy SIRO at Humber Teaching NHS Foundation Trust; and Beatrix Fletcher, senior programme manager (AI) at Guy’s and St Thomas’ NHS Foundation Trust.
Bedfordshire, Luton and Milton Keynes ICB has received additional funding from the Digitising Social Care Programme’s Adult Social Care Technology Fund, allowing it to continue offering support through the AI-driven pain assessment app, PainChek®. When installed on a smartphone or tablet, PainChek® recognises facial expressions, using AI to “identify and quantify pain levels”. It’s currently being used to support patients who have communication difficulties, dementia or learning difficulties, gathering information which can then be used to create a data profile and help inform medication prescribing.
NHS Greater Glasgow and Clyde, NHS Lothian and AI evaluation company Aival have begun testing the technical performance of AI tools as part of a £1 million project looking at how well AI integrates with existing clinical systems and workflows. Funded by Innovate UK, the project aims to assess the safety and effectiveness of AI technology, creating a validation framework that will support assessments of these tools prior to procurement and help develop “less invasive and more cost-effective options”.
A new approach has been created using AI-simulated, demographically representative patient and public involvement and engagement (PPIE) panels, designed to enhance public voices and overcome challenges around recruitment, geography, and inclusiveness. Developed by an NHS Golden Jubilee volunteer, Andrew Steele, the approach incorporates AI large language models and UK Census data to generate what Steele refers to as “demographically representative virtual panels, enriched with specific lived experiences and health conditions”.
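Purely as an illustration of the general idea, the sketch below shows how one such virtual panellist might be generated: demographic attributes are sampled according to population weights, then a persona prompt is built for a language model. The weights, the sample_panel_member and build_prompt helpers, and the omitted query_llm call are all hypothetical and are not drawn from Steele’s actual implementation.

```python
import random

# Illustrative demographic proportions only; a real panel would use published UK Census tables.
AGE_BANDS = {"18-29": 0.18, "30-49": 0.33, "50-69": 0.31, "70+": 0.18}
REGIONS = {"London": 0.13, "North of England": 0.23, "Midlands": 0.17,
           "South of England": 0.30, "Scotland": 0.08, "Wales": 0.05, "Northern Ireland": 0.04}
CONDITIONS = ["type 2 diabetes", "COPD", "chronic pain", "no long-term condition"]


def sample_panel_member() -> dict:
    """Sample one virtual panellist, weighting age band and region by the proportions above."""
    age = random.choices(list(AGE_BANDS), weights=list(AGE_BANDS.values()))[0]
    region = random.choices(list(REGIONS), weights=list(REGIONS.values()))[0]
    condition = random.choice(CONDITIONS)
    return {"age_band": age, "region": region, "condition": condition}


def build_prompt(member: dict, question: str) -> str:
    """Ask the language model to answer in the voice of the sampled persona."""
    return (
        f"You are a member of a public involvement panel, aged {member['age_band']}, "
        f"living in {member['region']}, with lived experience of {member['condition']}. "
        f"From that perspective, respond to: {question}"
    )


if __name__ == "__main__":
    panel = [sample_panel_member() for _ in range(12)]
    question = "How should your local hospital explain its use of AI to patients?"
    for member in panel:
        prompt = build_prompt(member, question)
        # response = query_llm(prompt)  # hypothetical call to whichever LLM is used; omitted here
        print(prompt)
```

In practice, the sampling weights would come from real Census tables and the prompts would be sent to the chosen language model, with the returned responses then collated as the simulated panel’s feedback.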