The Department for Science, Innovation & Technology (DSIT) has published a report examining the UK’s AI assurance market, looking at the current landscape, opportunities for future market growth, and potential government actions required to support this “emerging industry”.
The report begins with a ministerial foreword from Peter Kyle, the secretary of state for science, innovation and technology, who notes the need to ensure “conditions are right for global AI companies to want to call the UK home”, citing “rapid improvements in AI capabilities” which have “made the once unimaginable possible”, such as in finding new ways of identifying and treating disease.
Kyle goes on to highlight the importance of AI assurance in providing “the tools and techniques required to measure, evaluate, and communicate the trustworthiness of AI systems”, increasing confidence and further promoting the adoption of safe and responsible AI across the country.
Informed by an industry survey, interviews with industry experts, focus groups with members of the public, and an economic analysis of the AI assurance market, the report ultimately finds that there are opportunities to drive the market forward by tackling challenges relating to demand, supply, interoperability, lack of understanding, lack of “quality infrastructure”, limited access to information, and a “fragmented” AI governance landscape.
On demand, the report sets out government actions to address knowledge gaps and to develop an AI Essentials Toolkit to help startups and SMEs “engage with good practice in trustworthy AI”; on supply, it commits to “addressing information asymmetries” and to promoting collective action to increase third-party supply and boost confidence and trust; and on interoperability, it notes a requirement to drive common understanding and to develop a terminology tool for responsible AI to support cross-border interoperability and trade.
Some of the key findings from the report include a lack of understanding of existing regulations, which serves as a barrier to the uptake of AI assurance; issues with assessing the quality of available tools and services; and the need for a “common understanding” of what AI assurance is and how AI systems can be “assured in practice”.
Actions to be taken by DSIT in responding to these findings include the development of an AI assurance platform to support AI developers in navigating the “complex landscape”, incorporating tools such as a free baseline assessment of organisational good practice; and the development of a roadmap with industry stakeholders to set out the department’s vision and the actions to be taken to achieve it.
The report closes with a call for collaboration and collective action from stakeholders across the AI assurance ecosystem, to support the industry’s growth and “help to ensure the safe, equitable and responsible development and deployment of AI”.
The wider trend around AI assurance and evaluations
Toward the end of October, NHS England shared guidance on evaluating artificial intelligence projects and technologies, drawing on learnings from the ‘Artificial Intelligence in Health and Care Award’, which ran for four years until 2024 and supported the design, development and deployment of “promising” AI technologies.
The National Institute for Health and Care Excellence also recently launched a new reporting standard designed to help improve the transparency and quality of cost-effectiveness studies of AI technologies, in a move it hopes will “help healthcare decision-makers understand the value of AI-enabled treatments” and offer patients “faster access to the most promising ones”.
Somerset NHS Foundation Trust also published the final version of its AI policy, focusing on the need for safe integration and an approach that balances innovation with ethical and legal responsibilities. The policy outlines that whilst at present “even the best and most complex AI models (Siri, Alexa, Chat-GPT, etc.) do not surpass narrow AI…we must ensure protections are in place for future developments”.
AI use cases from across the NHS
In October, the UK government awarded £12 million in funding for projects utilising innovative technologies such as AI, VR and wearable sensors in supporting people with drug addictions and reducing drug-related deaths. Projects receiving funding include a remote monitoring platform capable of detecting respiratory issues related to opioid overdose, featuring a biosensor paired with a mobile device which allows for “the immediate alerting” of local naloxone carriers and emergency services.
University Hospitals Coventry and Warwickshire shared insight into how the trust is utilising artificial intelligence with the aim of improving patient experience, with a focus on a collaborative project designed to reduce the number of missed appointments.
HTN also took a closer look at some of the latest use cases and research around artificial intelligence across the NHS, including in predicting disease development, detecting lung cancer, analysing brain tumours, and more.
And HTN hosted a panel discussion exploring whether the reality of AI will live up to the current hype, along with examining how bias in healthcare data can be managed, featuring experts from the field: Puja Myles, director at MHRA Clinical Practice Research Datalink; Shanker Vijayadeva, GP lead and digital transformation for the London region at NHS England; and Ricardo Baptista Leite, M.D., CEO at HealthAI, the global agency for responsible AI in health.