
Considerations for artificial intelligence and machine learning in drug development and manufacturing

The FDA has released a discussion paper intended to spark dialogue on artificial intelligence (AI) and machine learning (ML) in drug development and manufacturing, entitled ‘Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products’.

The paper covers a number of key topics, including current and potential uses of AI and ML in drug development; clinical research; post-market safety surveillance; advanced pharmaceutical manufacturing; and FDA experience with AI and ML in drug development.

It summarises its findings with a section dedicated to considerations for the use of AI and ML in drug development, which we explore here.

The document describes how AI and ML have been applied to a broad range of drug development activities and how their use continues to evolve, noting that they have “the potential to accelerate the drug development process and make clinical trials safer and more efficient.”

However, the paper adds, it is important to assess whether the use of AI and/or ML introduces specific harms and risks; for example, algorithms can have the potential to amplify errors and pre-existing biases in underlying data sources, which can raise concerns about generalisability and ethics. In addition, “an AI/ML system may exhibit limited explainability due to its underlying complexity or may not be fully transparent for proprietary reasons.”

Overarching standards and practices for use of AI and ML

The paper acknowledges an increased commitment by the Federal Government and the international community to facilitate AI innovation and adoption, and “as a result, efforts for the development of cross-sector and sector-specific standards to facilitate the technological advancement of AI have rapidly increased in both domestic and international forums”. It points to the publication of ‘U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools’ from the National Institute of Standards and Technology as an example. This plan identifies several areas of focus for AI standards development, including data and knowledge, performance testing and reporting methodology, risk management and trustworthiness.

Other standards organisations are developing relevant AI/ML standards and work products which address “fundamental issues of data quality, explainability and performance”, it continues, “in addition to examining applications that are specific to certain industries.” One example is the Verification and Validation (V&V 40) risk-informed credibility assessment framework, initially developed by the American Society of Mechanical Engineers to assess the credibility of computational models used for medical devices; it was later adopted for model-informed drug development.

In addition to the V&V 40 standard, the document highlights guiding principles to inform the development of good machine learning practices for medical devices that use AI/ML, jointly published by the FDA, Health Canada and the UK’s MHRA. The principles include adopting a total product life cycle approach in which multi-disciplinary expertise is leveraged throughout product development, with an in-depth understanding of how the model will be integrated into the clinical workflow, and emphasise the importance of adequate representation of age, gender, sex, race and ethnicity within the clinical study population. In addition, the principles highlight the need for sufficient transparency with regard to the product’s intended use and indications, the data used to test and train the model, and any known limitations. They also note the need to monitor deployed models for performance whilst managing the risks of model retraining.

The FDA’s Center for Devices and Radiological Health issued a proposed framework in 2019 for modifications to AI/ML-based software as a medical device, which suggests use of a predetermined change control plan mechanism so that a sponsor can “proactively specify intended modifications to device software and the methods that will be used to ensure their safety and effectiveness”. It is hoped that this will “lay the foundation for AI/ML-enabled devices with improved capacity for adaptation.”

Discussion of considerations

The paper shares how the FDA is considering approaches to provide regulatory clarity around the use of AI/ML in drug development. It acknowledges that whilst the above standards and practices can potentially be adapted to address the use of AI/ML in drug development, this context could also raise specific challenges and additional considerations.

It highlights an aim to initiate discussion with stakeholders and gather feedback on three key areas: human-led governance, accountability and transparency; quality, reliability and representativeness of data; and model development, performance, monitoring and validation.

For each of these areas, the paper adds, a risk-based approach could “include measures commensurate with the level of risk posed by the specific context of use for AI/ML”.

On human-led governance, accountability and transparency, the document states that a risk management plan that considers the context of use may be applied to identify and mitigate risks. This approach could help guide the level of documentation, transparency and explainability, and also provide “critical insight on the initial planning, development, function and modification of the AI/ML in the specific context of use”. It suggests a number of questions to put to stakeholders, including which specific use cases of AI/ML in drug development have the greatest need for additional regulatory clarity, what transparency means in the use of AI/ML in drug development, and what the main barriers to, and facilitators of, that transparency are.

Looking at quality, reliability and representativeness of data, the paper provides a number of considerations. It encourages exploration of the three categories of bias that may exist in data (human, systemic and statistical/computational), along with data integrity, data privacy, the record trail that accounts for the origin of the data, and its relevance, replicability, reproducibility and representativeness. A key question for stakeholders is what practices developers and manufacturers currently use to assure data integrity and privacy and to identify bias.
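
The paper does not prescribe methods for any of these checks, but the representativeness consideration lends itself to a simple illustration: comparing the demographic make-up of a training dataset against the intended patient population. Below is a minimal sketch, assuming a chi-square goodness-of-fit test; the groups, counts and proportions are invented for the example.

```python
# Hypothetical illustration of a representativeness check: comparing the
# demographic distribution of a training dataset against the intended
# patient population. All groups and figures below are invented.
from scipy.stats import chisquare

# Observed counts per age group in the training data (invented).
observed = {"18-39": 120, "40-64": 310, "65+": 70}

# Expected proportions per group in the intended population (invented).
expected_props = {"18-39": 0.30, "40-64": 0.45, "65+": 0.25}

total = sum(observed.values())
expected = [expected_props[group] * total for group in observed]

# Chi-square goodness-of-fit: does the training data match the population?
stat, p_value = chisquare(list(observed.values()), f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Training data may under-represent one or more groups.")
```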

Finally, coming to model development, performance, monitoring and validation, the paper notes that it is important to consider how the model will be used, the model’s risk and credibility, and its context of use. It encourages consideration of model complexity to balance performance against explainability, along with monitoring the model, and documenting those monitoring efforts, to ensure that it remains reliable, relevant and consistent over time. The paper adds that data considerations should also include details of the training dataset used to develop the model.
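
Again, the paper stays at the level of principle, but a rough sketch of what documented performance monitoring might look like follows: tracking a model’s discrimination (AUC) on each monthly batch of outcomes against a validation-time baseline. The baseline, tolerance and data are all invented for the example.

```python
# Hypothetical illustration of ongoing performance monitoring: computing a
# model's AUC per monthly batch and flagging drops against a baseline.
# The baseline, threshold and toy data are invented for the example.
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.85   # AUC recorded at validation time (invented)
TOLERANCE = 0.05      # maximum acceptable drop before investigation (invented)

def check_monthly_performance(batches):
    """Log AUC per batch of (labels, scores) and flag degradations."""
    for month, (y_true, y_score) in batches.items():
        auc = roc_auc_score(y_true, y_score)
        degraded = auc < BASELINE_AUC - TOLERANCE
        status = "INVESTIGATE" if degraded else "ok"
        print(f"{month}: AUC={auc:.3f} [{status}]")

# Invented toy data: two monthly batches of outcomes and model scores.
batches = {
    "2024-01": ([0, 1, 1, 0, 1, 0], [0.2, 0.9, 0.8, 0.3, 0.7, 0.1]),
    "2024-02": ([0, 1, 1, 0, 1, 0], [0.6, 0.5, 0.4, 0.7, 0.5, 0.3]),
}
check_monthly_performance(batches)
```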

As a final consideration, the paper encourages questions around: the current tools, processes, approaches and best practices stakeholders use for documenting development and performance; the selection of model types and algorithms; how the use of specific approaches for validating models and measuring performance is determined; how transparency and explainability are evaluated; how issues of accuracy and explainability are addressed; the considerations for selecting open-source AI software; and the use of real-world performance data in monitoring AI and ML.

Read the paper in full here.