The UK government has published a response to a report from the Regulatory Horizons Council (RHC) about the regulation of AI as a medical device (AIaMD), accepting all 15 of its recommendations, and offering insight into current work and next steps in this space.
Highlighting that AIaMD has “advanced significantly in recent years”, Baroness Merron, parliamentary under-secretary of state for patient safety, women’s health and mental health, presented the response, adding: “To ensure products are safe and effective we are developing best-in-class regulations which uphold safety standards, encourage innovative and sustainable product development, and drive better outcomes for patients.”
The Baroness also shared details of her team’s upcoming liaison with the RHC “with a view to formally commission a review of generative AI in healthcare to ensure we are able to address the pressures mounting on the NHS, while drawing on the opportunity to catalyse the adoption of AI in healthcare through proportionate and innovative regulation”.
Recommendations made in the RHC’s report cover regulatory capacity and capability, including that the Medicines and Healthcare products Regulatory Agency (MHRA) should be “specifically resourced with a long-term funding commitment” to enable the agency to effectively “create and service a regulatory framework that is efficient, proportionate and safe”. The government accepts this in principle, stating that it “continues to support the appropriate funding required for the operation of the MHRA” and pointing to £1 million in funding toward the launch of AI Airlock, a regulatory sandbox for AIaMD, which is set to run to spring 2025.
A further recommendation relates to regulatory capacity and capability in AIaMD, which the government accepts, noting “the MHRA is supportive of this recommendation and is working to expand its regulatory capacity in AI within existing resources”.
Moving on to consider the whole product lifecycle, accepted recommendations include that the UK should aim for an AIaMD regulatory framework that is “legislatively light”, maximising the role of standards and guidance, and building on existing regulations for software as a medical device “while also addressing the specific challenges of AI technologies”. The legal framework is undergoing updates, according to the government’s response, with changes remaining “light-touch for software and AI products”.
A further recommendation relates to the need for manufacturers to be required to provide evidence they have evaluated and mitigated risks relating to poor generalisability and AI bias. The response considers that these risks “would be covered by the essential requirements which manufacturers are required to comply with under regulation 8 of the UK MDR 2002”, adding: “Manufacturers must produce documented evidence against these requirements for their product’s intended use.”
Other recommendations accepted by the UK government include the need for manufacturers to “work with health institutions to provide evidence that the AIaMD is likely to perform safely within their local setting” prior to deployment; that key stakeholders and manufacturers should collaborate to “create standards that ensure post-market monitoring of performance and safety”; and that health institutions and manufacturers should agree, as part of contractual negotiations prior to deployment, on the approach for monitoring and responding to performance and safety issues.
As part of the government’s responses to these recommendations, it shares that the MHRA has established relationships with international medical device regulatory bodies such as the FDA and Health Canada, is working with standards bodies such as the British Standards Institution, and is currently developing an “international reliance framework” to help promote access to medical devices and “reduce the duplication of regulatory effort for businesses”.
Overcoming challenges around AI regulation, effective adoption, and more
A recent HTN Now webinar focused on the practicalities of AI technologies, exploring topics including implementation, adoption, the role of data, policy, regulation, evaluation and best practices. With the help of our expert panellists, we also took a closer look at examples of AI in health and care. Panellists included Neill Crump, digital strategy director at The Dudley Group NHS Foundation Trust; Lee Rickles, CIO, director and deputy SIRO at Humber Teaching NHS Foundation Trust; and Beatrix Fletcher, senior programme manager (AI) at Guy’s and St Thomas’ NHS Foundation Trust.
A poll over on our LinkedIn page asked our audience what they thought was the biggest barrier to responsible AI in healthcare – inadequate regulation, support for safe adoption, data and bias, or lack of evidence and evaluation? Support for safe adoption came out on top, receiving 41 percent of the vote, whilst inadequate regulation received 25 percent of the vote, including from audience members working in compliance and project management.
The MHRA selected five new technologies as part of the AI Airlock scheme, “to better understand how we can regulate artificial intelligence powered medical devices”, including medical devices for cancer, chronic respiratory disease and radiology diagnostic services. The selection process followed an industry-wide call for applications, with candidates having to demonstrate the potential of their AI-powered device. The pilot forms part of the MHRA’s regulatory “sandbox”, said to support manufacturers in exploring “how best to collect evidence that could later be used to support the approval of their product”.