News

“Building and maintaining trust is crucial to realising the benefits of AI” – CDEI publishes portfolio of AI assurance techniques

The Centre for Data Ethics and Innovation (CDEI), part of the Government’s Department for Science, Innovation and Technology, has published its “CDEI portfolio of AI assurance techniques”, setting out a range of AI assurance approaches and how to use them.

“The portfolio is useful for anybody involved in designing, developing, deploying or procuring AI-enabled systems, and showcases examples of AI assurance techniques being used in the real world to support the development of trustworthy artificial intelligence”, the agency states.

Recognising the importance of AI in future-proofing organisations and industries nationally, the CDEI’s guidance notes the role that trust will play in realising the technology’s benefits. It states: “Building and maintaining trust is crucial to realising the benefits of AI. Organisations designing, developing, and deploying AI need to be able to check that these systems are trustworthy, and communicate this clearly to their customers, service users, or wider society.”

The published guidance sets out a range of different assurance techniques which can be used to measure, evaluate and communicate the trustworthiness of AI systems, including:

  • Impact assessment: Used to anticipate the effect of a system on environmental, equality, human rights, data protection, or other outcomes.
  • Impact evaluation: Similar to an impact assessment, but conducted retrospectively, after a system has been implemented.
  • Bias audit: Assessing the inputs and outputs of algorithmic systems to determine whether there is unfair bias in the input data, or in the outcome of a decision or classification made by the system (a minimal illustrative sketch follows this list).
  • Compliance audit: A review of a company’s adherence to internal policies and procedures, or external regulations or legal requirements. Specialised types of compliance audit include system and process audits and regulatory inspection.
  • Certification: A process where an independent body attests that a product, service, organisation or individual has been tested against, and met, objective standards of quality or performance.
  • Conformity assessment: Provides assurance that a product, service or system being supplied meets the expectations specified or claimed, prior to it entering the market. Conformity assessment includes activities such as testing, inspection and certification.
  • Performance testing: Used to assess the performance of a system with respect to predetermined quantitative requirements or benchmarks.
  • Formal verification: Establishes whether a system satisfies some requirements using the formal methods of mathematics.
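The portfolio describes these techniques at a high level rather than prescribing specific tooling. As a purely illustrative example of the kind of measurement a bias audit might involve, the short Python sketch below compares positive-outcome rates for a decision system across two hypothetical groups and flags any disparity against a chosen threshold; the data, metric and threshold are assumptions made for illustration and are not drawn from the CDEI guidance.

```python
# Illustrative sketch only: a minimal bias audit check comparing positive-outcome
# rates across groups (a simple demographic-parity style measure). The data and
# the 0.8 threshold are hypothetical, not taken from the CDEI portfolio.
from collections import defaultdict

# Hypothetical audit records: (protected_group, system_decision) pairs,
# where 1 means the system produced the favourable outcome.
decisions = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {group: positives[group] / totals[group] for group in totals}
print("Positive-outcome rate by group:", rates)

# Compare the lowest and highest group rates; 0.8 echoes the common
# "four-fifths" rule of thumb, though the appropriate metric and threshold
# depend on the system and its context.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparity ratio: {ratio:.2f}", "- review needed" if ratio < 0.8 else "- within threshold")
```

In practice a bias audit would also examine the input data and system behaviour across many more dimensions; the sketch simply shows that these techniques translate into concrete, checkable measurements.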

Following on from the government’s National AI Strategy, which recognised “the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors”, and the CDEI’s “Industry Temperature Check” report, which looked at demand for and uptake of AI across different industries, this guidance presents a range of real-world case studies to demonstrate how AI is being implemented responsibly across multiple sectors.

Qualitest: Supporting NHS England to future-proof their QA practices with AI

One of the case studies selected for inclusion in the CDEI’s portfolio of AI assurance techniques is Qualitest’s “Supporting NHS England to Future-Proof Their QA Practices with AI”.

The case study’s stated aims were to help NHS England understand the current state of the industry’s awareness of, and readiness for, the assurance of AI and ML-powered technology; to identify the new approaches, and refinements to existing approaches, required to assure the quality of AI software; and to understand the ethical, fairness and bias risks of AI-enabled medical technology, with processes in place to guard against those risks being introduced to the software landscape.

“NHS England wanted to ensure that as AI-enabled solutions arrive in their landscape, they have the existing collateral, processes, quality enablers and accelerators ready to ensure they are able to reliably assure these technologies in the safety-critical context of healthcare. NHS England’s main concern was that if they did not take a forward-looking stance and tried to reactively learn in-project, there would be the risk of delays to the rollout of vital IT solutions that would have internal and potentially clinical implications.”

Specialist data scientists from Qualitest liaised with NHS England to understand how software is delivered into their landscape and where existing assurance approaches could be augmented for AI technology. Based on experience, knowledge and current AI best practice, Qualitest then drew up a set of materials on the lifecycle of ML systems, along with details of “strategic and tactical quality considerations that must be met in each stage of that lifecycle”. This was aligned with the CRISP-DM industry-standard lifecycle, which was identified as sharing similarities with the traditional software lifecycle and so “would simplify adapting existing processes”.

“At each phase of the lifecycle from business understanding, through modelling and deployment, Qualitest gave a detailed picture of where defects could be introduced, what they might exhibit as, and the quality processes required for defect prevention and testing.”
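For readers unfamiliar with it, CRISP-DM breaks the lifecycle into six phases: business understanding, data understanding, data preparation, modelling, evaluation and deployment. The Python sketch below illustrates the general idea of attaching quality checkpoints to each phase; the checkpoint wording is the author’s illustration and does not reproduce Qualitest’s or NHS England’s actual materials.

```python
# Illustrative sketch only: the six standard CRISP-DM phases paired with example
# quality checkpoints of the kind described above. The checkpoint text is
# hypothetical, not Qualitest's or NHS England's material.
CRISP_DM_QUALITY_CHECKPOINTS = {
    "Business understanding": "Are success criteria, clinical risks and intended use clearly defined?",
    "Data understanding": "Is the data representative of the target population, with known gaps documented?",
    "Data preparation": "Are cleaning, labelling and feature-engineering steps reproducible and free of leakage?",
    "Modelling": "Are model choices, training runs and hyperparameters versioned and justified?",
    "Evaluation": "Does performance meet predefined benchmarks, including fairness and robustness checks?",
    "Deployment": "Are monitoring, rollback and incident processes in place for the live system?",
}

for phase, checkpoint in CRISP_DM_QUALITY_CHECKPOINTS.items():
    print(f"{phase}: {checkpoint}")
```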

Qualitest took into account a series of relevant cross-sectoral regulatory principles, including: safety, security & robustness; appropriate transparency & explainability; fairness; accountability & governance; and contestability & redress.

The case study noted: “NHS England required support from a quality assurance company that was able to help them understand their challenges and create processes for their QA strategy for AI and ML. This meant a consultative approach by an organisation that was highly experienced in both software assurance and successful AI/ML implementations, and that could call on that experience to work in partnership with them at a strategic level.”

It summarised: “After engaging with Qualitest, NHS England has taken leading steps to prepare for the increasing adoption of AI & ML technology in the healthcare landscape. They are aware of the unique and diverse challenges that these systems bring and have taken proactive steps to prepare their quality approaches around them. This will maximise their speed-to-market and minimise their risk in rolling out these solutions.”

Best Practice AI: Developing an explainability statement for an AI-enabled medical symptom checker

Another of the case studies selected by the CDEI to explore AI assurance was Best Practice AI’s “Developing an explainability statement for an AI-enabled medical symptom checker”.

The case study focused on an AI-enabled symptom checker, which takes user data about medical symptoms and answers to a selection of questions, and provides a report suggesting possible causes and next steps. Best Practice AI developed an AI explainability statement to give users transparent information about how the AI technology works.

“The purpose of the AI explainability statement is to provide end users, customers and external stakeholders (including regulators) with transparent information, in line with the GDPR expectations set by the UK’s data protection regulator, the Information Commissioner’s Office. The document provides, for example, clear insight into the purpose and rationale for AI deployment, how it has been developed and what data has been used in design and delivery, insight on governance and oversight procedures, clarity on consideration and management of ethical and bias issues.”

The explainability statement is a non-technical explanation of how and why the symptom checker uses AI, aimed at customers, regulators and the wider public.

The case study stated: “To inform the preparation of an explainability statement, clients will have already invested in techniques such as various audits (e.g. bias audit). This approach requires firms to document their internal processes, including governance and risk management. Where there appear to be gaps or potential issues, these are flagged by the external team for future improvement. The main output is a document published on the organisation’s website to provide maximum transparency to external stakeholders.”

It goes on to highlight that Best Practice AI identified a pressing need to explain AI systems to a wide range of stakeholders, “in a language that is readily accessible and in a format that can be understood by non-technical readers”. The case study notes the importance of producing an AI explainability statement so that an organisation’s internal procedures, approaches and governance can be scrutinised externally, and to prepare for anticipated future developments in regulatory requirements.

“There is also a growing regulatory focus around the need for transparent and explainable AI. This is described in the ICO’s ‘Explaining decisions made with AI’ guidance. AI explainability statements are public-facing documents providing transparency in order to comply with global best practices and AI ethical principles, as well as legislation. In particular, AI explainability statements aim to facilitate compliance with Articles 13, 14, 15 and 22 of GDPR for organisations using AI to process personal data.”

Benefits to the organisation identified by the case study included providing transparency for external stakeholders, building and maintaining public trust in the AI systems in use, giving internal stakeholders a benchmark for where best practice would suggest a need for improvements, and demonstrating compliance with relevant regulation.

What’s the greatest challenge for AI in healthcare? 

A poll carried out by HTN in March 2023 identified “Regulation and Governance” as the biggest challenge for AI in healthcare, indicating that interest in the topic of AI assurance is likely to be high.

Our audience was asked to choose from four options: funding streams; skills and domain knowledge; regulation and governance; and data and infrastructure. Participants in the poll came from a range of backgrounds and professions, including digital technology, clinical roles, project management, information security and biomedical science.