The Medicines and Healthcare products Regulatory Agency (MHRA) has updated its ‘Software and AI as a Medical Device Change Programme’.
The programme, originally published last year, aims to ensure regulatory requirements for software and AI are clear and that patients are protected. A new roadmap has been added, setting out deliverables against each objective and furthering the work to ensure AI and software used in medical devices are safe for patients.
The roadmap sets out the programme along with specific deliverables to address key problems. Here, we explore each of the work packages along with the actions planned to tackle the highlighted problems.
Qualification: there is currently a lack of clarity as to what qualifies as SaMD (software as a medical device), and clarity is required for “appropriate, effective and proportionate regulation”.
To address this, guidance will be published in a variety of areas, clarifying distinctions such as SaMD versus wellbeing and lifestyle software products; medical devices versus medicines and companion diagnostics; custom-made devices; and more.
Noting that the “cornerstone to almost any activity required by medical device regulation is having a well-crafted and defined intended purpose”, guidance will also be developed to assist manufacturers in areas such as avoiding common pitfalls when defining intended purposes; clearly articulating populations within the scope of the intended purpose; and understanding how intended purposes link to quality management systems, risk management and other processes.
Finally, the MHRA acknowledges that deployment and modifications can make it unclear what entity counts as ‘the manufacturer’, so it plans to develop and publish a policy position on the implications for this.
Classification: at present, the Medical Device Regulations 2002 do not classify software proportionate to the risk it might pose to patient and public safety.
The roadmap aims to ensure that classification rules closely follow the risk that specific SaMDs may pose to public and patient safety, when known; to impose safety and performance requirements; and to ensure that those rules provide sufficient flexibility to address the risk profile of novel devices without needless restriction on innovation.
To meet these objectives, legislation will be implemented that aligns the classification rules more closely with the IMDRF’s Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations.
In addition, the MHRA is working with NICE on their Evidence Standards Framework to make sure this aligns with the SaMD classification rules.
There are plans to explore an ‘airlock process’ for SaMD which would allow software to generate real-world evidence for a limited time period, under close monitoring. In addition, guidance will be published outlining key terms to ensure that the new classification rules can be sensibly and consistently interpreted.
Premarket requirements: there is a need for clearer premarket requirements that fit software to ensure a smoother path to market for manufacturers, and better protection for patients and the public.
Objectives in this area are to ensure that SaMD are supported by adequate data on safety, effectiveness and quality prior to being placed on the market; to ensure that premarket requirements are clear in how they apply to SaMD; and to ensure that any premarket submission includes all necessary information.
Deliverables include a review of essential requirements for software, to ensure that they provide robust assurance that the software meets its intended purpose and is safe; there will also be a review and guidance on best practice for SaMD development.
Existing guidance on retrospective non-interventional performance studies for SaMD will be updated to indicate whether these studies qualify as clinical investigations, which will clarify what needs to be notified or registered with MHRA.
MHRA will work with the Health Research Authority (HRA) to produce joint regulatory guidance highlighting the main points of consideration from both organisations, the major processes that need to be undertaken and key points at which to begin these processes, and more.
Another deliverable in this area is around human-centred SaMD, referring to the “approach to interactive systems that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques”. Here, regulatory guidance will be produced clarifying the importance of these human factors, linking this literature to essential requirements, and expanding upon the guidance for best practice development and deployment.
The final action here focuses on regulatory guidance for registration and nomenclature for SaMD. The guidance notes that “existing nomenclature as it applies to software can be broad and imprecise, thereby impairing effective signal detection for these devices”, and adds that MHRA will work to advance this towards “sufficient granularity to better enable signal detection and post-market trending”.
Post market: the MHRA states that the safety signal for SaMD needs to be stronger, and that the post-market surveillance system needs to be adapted so that signals are “received and not lost to noise”.
The objectives listed include considering how to utilise real world evidence to provide further assurance that SaMD functions as intended, along with strengthening the post market surveillance system to support quicker and more comprehensive capture of adverse incidents. In addition, clear change management requirements for SaMD must be articulated.
To meet these objectives, the MHRA shares how it has already conducted a preliminary analysis of the individual case reports identified for SaMD, and adds that it will consider what additional sources of data would support detection of safety signals; consider how best to highlight the benefits of reporting adverse events via the ‘Yellow Card Scheme‘; consider how to better curate and analyse data; and consider revisions to standard operating procedures and processes so that the MHRA recognises risks early and responds swiftly.
Guidance will be produced clarifying what constitutes a reportable adverse incident; reporting requirements; that harm may occur because of action taken or not taken based on results provided by the device; appropriate next steps depending on the incident in question; and the importance of ensuring populations within the intended purpose are considered within vigilance.
With regards to change management for SaMD, guidance is to be produced covering the key principles of change management in software and how this links into best practice; how it links to quality management systems, risk management, and other requirements; that change should be anticipated, planned and documented; that there is an obligation to actively monitor and collect information; and that intentional change can be distinguished by the predetermined change control plan.
Looking at predetermined change control plans and change protocols in more detail, the MHRA shares how it will craft a procedure that details a change management process to anticipate change in software over time; this is likely to include two primary elements, SaMD Pre-Specification and Algorithm Change Protocol. Details will be provided on what metrics to track and how to agree performance ‘bands’, and on how the change management process links to aspects of the product lifecycle to maximise operational effectiveness.
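As a rough illustration of how agreed performance ‘bands’ might work in practice, the sketch below checks a tracked metric against pre-specified limits. The metric name, band limits and record structure are illustrative assumptions, not values from the MHRA roadmap.

```python
# Hypothetical sketch: checking a tracked SaMD performance metric against
# pre-agreed performance 'bands' from a predetermined change control plan.
from dataclasses import dataclass

@dataclass
class PerformanceBand:
    metric: str
    lower: float   # below this, the change falls outside the agreed band
    upper: float   # above this, performance has moved beyond what was pre-specified

def within_band(band: PerformanceBand, observed: float) -> bool:
    """Return True if the observed metric value stays inside the agreed band."""
    return band.lower <= observed <= band.upper

# Illustrative example: sensitivity must remain between 0.90 and 1.00
# for a model update to count as an anticipated, pre-specified change.
sensitivity_band = PerformanceBand(metric="sensitivity", lower=0.90, upper=1.00)

assert within_band(sensitivity_band, 0.93)      # change stays within the plan
assert not within_band(sensitivity_band, 0.85)  # would trigger review outside the plan
```

A real change control plan would track several such metrics per intended-purpose population, but the band check itself is this simple comparison.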
Finally, the MHRA notes that whilst it is common for SaMD to expand the intended purpose, population or users, this expansion must be supported by corresponding expansion in clinical evidence, proper documentation, and reporting. Regulatory guidance in this area is to be developed.
Cyber secure medical devices: existing medical device regulations do not currently consider the evolving risks presented by cyber security vulnerabilities, or the operational issues presented by legacy software.
The MHRA aims to articulate how cybersecurity issues translate to SaMD issues; to ensure that cybersecurity is adequately reflected in requirements and surveillance; and to work closely with other bodies to ensure that SaMD cybersecurity policy capitalises on synergies.
Secondary legislation is to be developed to align with the Connected Medical Device Security Steering Group principles, to be consistent with and build upon complementary requirements such as NHS Digital Technology Assessment Criteria requirements, and to harmonise with international best practice.
More guidance is to be produced on cybersecurity and related requirements to position cybersecurity in the context of lifecycle management processes; to make it clear that cybersecurity and related issues constitute shared responsibilities between manufacturer, systems manufacturer, and deployment organisation; to clarify specific requirements that primarily fall to the manufacturer; and to clarify necessary reporting of cybersecurity vulnerabilities.
With regards to management of unsupported software devices, the guidance notes: “We will produce best practice guidance for manufacturers, system providers, and local deploying organisations on how best to manage the risks that unsupported devices might present.”
In addition, the MHRA states that it currently receives “relatively few reports of cybersecurity vulnerabilities connected to medical devices”, but highlights a need to “improve reporting procedures to enable proactive mitigation of harm”. As such, it will work closely with industry on making these improvements.
AI rigour: there is currently a lack of clarity on how best to meet medical device requirements for products utilising artificial intelligence to ensure that they achieve the appropriate level of safety and meet their intended purpose.
To combat this, the MHRA lays out objectives to utilise existing regulatory frameworks to ensure AIaMD (artificial intelligence as a medical device) is supported by robust assurance that it is safe and effective; to develop supplementary guidance to ensure AIaMD placed on the market is supported by such robust assurances; and to outline technical methods to test AIaMD for safety and effectiveness.
In terms of deliverables, the guidance begins by sharing how in October 2021, ten guiding principles called “the basic principles of good machine learning practice” (GMLP) were published in collaboration with FDA and Health Canada.
Furthermore, a document will be produced to map how GMLP principles can be tied into key responsibilities in the Medical Device Regulations 2002 and relevant guidance. There will also be a complementary document providing a current snapshot of the standards landscape as it relates to meeting the internationally agreed GMLP principles.
There will be guidance produced outlining high-level best practice for AIaMD development and deployment, along with guidance clarifying and expanding upon the GMLP regarding ensuring representativeness of clinical study participants and datasets.
Alongside this, the MHRA will assist in developing standards, frameworks and tools to assist with the identification and measurement of bias. An approach will also be developed to identify under-represented features in data with the aim of using synthetic data to oversample the under-represented feature and achieve a better overall distribution.
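To make the oversampling idea concrete, here is a minimal sketch of the approach described above: an under-represented group in a dataset is identified and topped up with synthetic records until the distribution is balanced. The record format, the jitter applied to generate synthetic values, and the balancing target are all assumptions for illustration, not the MHRA's method.

```python
# Minimal sketch: oversample under-represented groups with synthetic records
# until every group matches the size of the largest group.
import random
from collections import Counter

def oversample(records, key, rng=random.Random(0)):
    """Generate synthetic copies (with small numeric jitter) of records in
    under-represented groups until all groups are equally represented."""
    counts = Counter(r[key] for r in records)
    target = max(counts.values())
    synthetic = []
    for group, n in counts.items():
        pool = [r for r in records if r[key] == group]
        for _ in range(target - n):
            base = rng.choice(pool)
            # Synthetic record: copy the base and perturb its numeric value.
            synthetic.append({**base, "value": base["value"] + rng.gauss(0, 0.01)})
    return records + synthetic

# Group "B" is under-represented (2 of 10 records) before oversampling.
data = [{"group": "A", "value": 1.0}] * 8 + [{"group": "B", "value": 2.0}] * 2
balanced = oversample(data, "group")
counts = Counter(r["group"] for r in balanced)
assert counts["A"] == counts["B"] == 8
```

Production approaches to synthetic data are far more sophisticated (for example, generative models), but the goal is the same: a better overall distribution across the identified features.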
AI interpretability: current requirements do not take into account adequate consideration of human interpretability and its consequences for safety and effectiveness.
Here, the MHRA aims to articulate how the opacity of AIaMD translates into safety, effectiveness and quality concerns, and to develop guidance regarding interpretability to ensure AI models are sufficiently transparent and therefore reproducible and testable.
To action this, guidance will build upon human-centred SaMD guidance, emphasising that it is the performance of the human-AI team and not just the performance of the model itself that is key to ensuring safety and effectiveness.
In addition, the MHRA will support wider work on tools and a framework to ensure that AI is trustworthy, working with a range of bodies including the Trustworthiness Auditing for AI project and Health Education England.
AI adaptivity: existing requirements and processes around notification and management of change need to fit and be streamlined for AIaMD.
In this area, objectives include articulating problems of fit with medical device regulation for adaptive AIaMD, with focus on distinguishing between models that are locked, batch-trained, or continuously learning; clarifying how adaptive AIaMD of each type could fit within existing change management processes; and crafting new guidance where appropriate for adaptive AIaMD.
A paper on guiding principles is to break down the types of change management issues currently seen in categories of product adaptivity; these issues will be reviewed and the challenges linked to existing or developing regulatory approaches to address them.
In addition, the MHRA plans to “explore the use of concept drift in determining when a significant or substantial change in performance has occurred”. Data that has been highly volatile to change (such as data collected on COVID) and assumed stable data will be utilised along with a range of AI models and analysis techniques, with the aim of developing a methodology to determine significant change in AIaMD.
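One simple way to frame the idea of detecting a “significant or substantial change in performance” is to compare recent model performance against a stable baseline. The sketch below flags drift when the recent mean deviates from the baseline by more than a set number of baseline standard deviations; this is an illustrative toy check, not the methodology the MHRA is developing, and the threshold is an assumption.

```python
# Illustrative concept-drift check: flag a significant change when recent
# performance deviates from the baseline by more than `threshold` standard
# deviations of the baseline scores.
from statistics import mean, stdev

def drifted(baseline_scores, recent_scores, threshold=3.0):
    """Return True when the recent mean falls outside the baseline's
    expected range (a crude significant-change detector)."""
    mu, sigma = mean(baseline_scores), stdev(baseline_scores)
    if sigma == 0:
        return mean(recent_scores) != mu
    return abs(mean(recent_scores) - mu) / sigma > threshold

stable = [0.91, 0.92, 0.90, 0.93, 0.91, 0.92]
assert not drifted(stable, [0.92, 0.91, 0.90])   # performance holds steady
assert drifted(stable, [0.70, 0.68, 0.72])       # large shift is flagged
```

Real concept-drift methods operate on streaming data and volatile inputs (such as the COVID-era data the MHRA mentions), but the underlying question is the same: has performance moved far enough from its baseline to count as a significant change?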
Finally, proposals to implement predetermined change control plans for SaMD will be reviewed for AIaMD, with suitability and applicability reviewed in the context of common challenges facing AIaMD such as poor interpretability and data bias.
Speaking on the new roadmap, Dr June Raine, Chief Executive at the MHRA, commented: “We’re building a new concept as an enabler rather than a watchdog, an enabling regulator that works in partnership, not in isolation, while all the time retaining independence.”
Read the roadmap in full here.