A framework for implementing and monitoring AI in the London health and care system has been published, covering five key areas: partnership, infrastructure and data, use cases, AI delivery approach, and communication and workforce development. It spans governance, roles and responsibilities, delivery lifecycles, proofs of concept, pilots, data pipelines, business as usual and scaling, through to monitoring.
It has been developed in collaboration with digital leaders from South East London NHS, AI experts, the AI Centre for Value Based Healthcare, and the Health Innovation Network. It represents, according to an executive summary, “an overview of the agreed way of implementing and monitoring artificial intelligence products in the London health and care system”.
Pointing to challenges including a “lack of clarity” on how to practically implement AI safely and efficiently, and a lack of access to expertise around AI, the framework’s intent is to present a collaborative methodology whereby “projects follow a level of consistency” to promote scalability, and where providers “share information about their plans and outcomes”.
Identifying an “AI Coordination and Advisory Lead (AI Lead)” for each ICS is “critical” to the delivery of the framework, it states, due to their role in collating information on the progress of AI implementations across the system and in offering expert advice in the planning phases, “focusing on the mitigation of risks”.
On partnerships, the framework focuses on shared learning and acting “in the best interests of the system as a whole”: ensuring that processes implemented around AI will work system-wide, agreeing which partners will take the lead on various parts of the project pathway, committing to ongoing monitoring and sharing progress updates, and agreeing to “consolidating some expertise” to be made available to all system partners. “There may be value in working in collaboration with vendors,” it states, adding that “it is important to consider how we ensure that the product is then made available to other providers in our system in a cost-effective way”.
When it comes to governance, the framework proposes that each ICS should form an “AI Advisory and Coordination Group” responsible for supporting the review of projects and the sharing of information across the system. It then moves on to outline roles and responsibilities across different levels, with health and care providers expected to govern and conduct AI projects, share information, and consider the potential for cross-organisational scale.
Moving on to consider AI infrastructure and data, the framework states that “the most common approach” to purchasing an AI product from a vendor is through a vendor-hosted AI model, or an off-the-shelf product with limited customisability, transferability, and opportunities for monitoring. Training AI models, on the other hand, is “substantially more complex”, it continues, requiring reusable infrastructure not dependent on vendors, high-performance compute, high-quality data, and “a skilled machine learning engineer workforce”. The full AI lifecycle, it states, “requires model hubs for model management, bias detection and dataset drift”.
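The framework does not prescribe tooling for this, but as a rough illustration of what monitoring for dataset drift can involve in practice, the sketch below compares the distribution of a single input feature in recent live data against the data a model was trained on, using a two-sample Kolmogorov-Smirnov test. The feature, threshold and data are hypothetical stand-ins, not taken from the framework.

```python
# Illustrative only: a minimal dataset drift check, not part of the published framework.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_values, live_values, alpha=0.05):
    """Return True if the live distribution differs significantly from the training data."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < alpha

# Hypothetical example: a numeric feature such as patient age, simulated here
rng = np.random.default_rng(0)
train_age = rng.normal(loc=55, scale=15, size=5_000)  # stand-in for training data
live_age = rng.normal(loc=62, scale=15, size=1_000)   # stand-in for recent live data

if detect_drift(train_age, live_age):
    print("Possible dataset drift - flag for review before the model's outputs are relied on.")
```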
From a data perspective, high-quality data is integral to the successful implementation of AI, according to the framework, which warns that any data quality issues could affect the performance or impact of an AI implementation. For information governance, it also recommends that the implementing organisation be responsible for carrying out a Data Protection Impact Assessment (DPIA) or putting a data sharing agreement in place, suggesting that “pooling expertise to make decisions may be valuable to support IG experts make what are sometimes difficult decisions about the balance of risk against benefits to patient care”.
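To make the data quality point concrete, here is a small, purely illustrative check of the kind an implementing team might run before feeding data into an AI product; the column names and the 5 percent review threshold are assumptions for the example, not recommendations from the framework.

```python
# Illustrative only: basic data quality checks ahead of an AI implementation.
import pandas as pd

def quality_report(df):
    """Summarise missingness and distinct values per column, flagging columns for review."""
    report = pd.DataFrame({
        "missing_fraction": df.isna().mean(),
        "n_unique": df.nunique(),
    })
    report["needs_review"] = report["missing_fraction"] > 0.05  # arbitrary example threshold
    return report

# Hypothetical records with placeholder identifiers
records = pd.DataFrame({
    "patient_id": ["A001", "A002", "A002"],
    "age": [54, 61, None],
    "hba1c_mmol_mol": [48.0, None, 51.5],
})

print(quality_report(records))
print("duplicate rows:", int(records.duplicated().sum()))
```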
As well as outlining potential use cases such as HR and training, staff rostering, smart automation and medical imaging, the framework sets out five questions which should be used as a baseline for assessing AI readiness: whether you have started with the problem and the AI product is genuinely the best answer to it, or whether you are “just listening to vendor hype”; how complex the AI model is and what challenges this could pose; how much integration is required with clinical or administrative systems; how “readily available” the required data and infrastructure are; and whether the product is already in use in the NHS.
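As a trivial illustration of how a project team might record its answers to those five baseline questions, the sketch below encodes them as a simple checklist; the field names paraphrase the questions, and the structure itself is an assumption rather than something the framework specifies.

```python
# Illustrative only: recording answers to the five baseline readiness questions.
from dataclasses import dataclass, fields

@dataclass
class ReadinessAssessment:
    problem_fit: bool                        # is the AI product really the best answer to the problem?
    model_complexity_understood: bool        # how complex is the model, and what challenges does that pose?
    integration_scoped: bool                 # integration with clinical or administrative systems understood?
    data_and_infrastructure_available: bool  # are the required data and infrastructure readily available?
    already_in_use_in_nhs: bool              # is the product already in use elsewhere in the NHS?

    def unresolved(self):
        return [f.name for f in fields(self) if not getattr(self, f.name)]

assessment = ReadinessAssessment(True, True, False, True, False)
print("Areas needing further work:", assessment.unresolved())
```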
When it comes to the delivery approach to AI products, the framework suggests four implementation phases: the idea pipeline, proof of concept, pilot, and business as usual and scale. For proof of concept, it states: “The intention is for this to be a fail fast and fail early stage, which is as simple as possible.” Where possible, it suggests using anonymised test data and not implementing full integrations with other systems, while ensuring that the standards are met that would allow integration to be implemented during a pilot. A key step in this process is identifying a clinical safety lead, it continues, along with considering privacy, ethics, and explainability.
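The framework does not specify how test data should be prepared for this stage. Purely as an illustration, one common approach at proof of concept is to pseudonymise direct identifiers and drop free-text fields before the data goes anywhere near the product; the field names and salt below are hypothetical, and real projects would follow local information governance advice (salted hashing is pseudonymisation rather than full anonymisation).

```python
# Illustrative only: simple pseudonymisation of proof-of-concept test data.
import hashlib

SALT = "project-specific-secret"  # hypothetical; in practice held securely, per project

def pseudonymise(identifier):
    """Replace a direct identifier with a one-way salted hash."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:12]

record = {"patient_id": "A001", "age": 54, "free_text_note": "seen in clinic, reports chest pain"}
test_record = {
    "patient_id": pseudonymise(record["patient_id"]),
    "age": record["age"],
    # free-text field dropped entirely at this stage
}
print(test_record)
```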
For the pilot phase, it is important to avoid a common challenge around a lack of “clearly defined end points and benefits criteria”, the framework notes, as this has the potential to lead to “retention of solutions that may not offer best value for money”. Recommendations for ensuring clinical safety include updating the DCB0160 clinical safety documentation and ensuring its approval by the clinical safety lead, checking that a product is registered with the MHRA if it is a medical device, and considering whether the product has been certified against BS 30440, the validation framework for the use of AI within healthcare. Procurement should take scalability into account, it goes on, and an evaluation plan should be developed to inform a business case to support this scale going forward.
The framework goes on to look at strategies for communication and engagement with the workforce and wider community, including driving awareness of AI and its potential benefits and risks. It also offers resources for organisations looking to improve staff training around the use and implementation of AI products.
To read the framework in full, please click here.
AI strategy and use cases across the health and care sector
Our latest HTN Now webinar focused on the practicalities of AI technologies, exploring topics including implementation, adoption, the role of data, policy, regulation, evaluation and best practices. With the help of our expert panellists, we also took a closer look at examples of AI in health and care.
An HTN Now panel discussion from last year looked at whether the reality of AI will live up to the current hype, and how to manage bias in healthcare data. Expert panellists included Puja Myles, director at MHRA Clinical Practice Research Datalink; Shanker Vijayadeva, GP lead and digital transformation for the London region at NHS England; and Ricardo Baptista Leite, M.D., CEO at HealthAI, the global agency for responsible AI in health. The session explored topics including what is needed to manage bias; what “responsible AI” really looks like; how to ensure AI is inclusive and equitable; how AI can help support underserved populations; the deployment of AI in the NHS; and the potential to harness AI in supporting the shift from reactive to proactive care.
In a recent poll on our LinkedIn page, we asked our audience what they think is the biggest barrier to responsible AI in healthcare: inadequate regulation, support for safe adoption, data and bias, or lack of evidence and evaluation. Support for safe adoption came out on top, receiving 41 percent of the vote, whilst inadequate regulation was in second place with 25 percent of the vote.
We caught up with Peter van Ooijen – professor of AI in Radiotherapy and coordinator of the Machine Learning Lab at University Medical Center Groningen, and former president of the European Society of Medical Imaging Informatics (EuSoMII) – for a recent interview, to talk about new technologies and future directions for medical imaging and radiotherapy.