The UK Government has published its AI Opportunities Action Plan, accepting recommendations for expanding computing capacity, establishing AI growth zones, and unlocking data assets, alongside a proposed delivery timeline.
In a foreword, Prime Minister Keir Starmer highlights the “defining opportunity of our generation”, outlining the potential of AI to “turbocharge every mission in this government’s Plan for Change”, and pointing to the need to “make our country an AI superpower”.
Some of the 50 recommendations set out by the plan include the development of a long-term plan for the UK’s AI infrastructure backed by a 10-year investment commitment, with the DSIT to publish a long-term compute strategy in spring 2025 and to set out a 10-year roadmap.
Actions include establishing “AI Growth Zones”, with the first to be delivered at Culham, the headquarters of the UK Atomic Energy Authority, and further zones to be announced by spring 2025. The plan also notes the need to mitigate the sustainability and security risks of AI infrastructure, with the DSIT to set out how the UK will address these challenges as part of its long-term compute strategy in spring 2025.
On data assets, accepted recommendations include rapidly identifying “at least 5” high-impact public data sets to make available to AI researchers and innovators, with the DSIT planning to explore this as it develops the National Data Library, and further details to be published by summer 2025.
The plan also notes action to strategically shape what data is collected, rather than “just making data available that already exists”, with the DSIT to explore this as part of the development of the National Data Library and its wider data access policy, and further details expected by summer 2025. Building public sector data collection infrastructure and financing the creation of “new high-value data sets” is also highlighted, again to be explored as part of the National Data Library.
Recommendations accepted under training, attracting, and retaining the “next generation of AI scientists and founders” include accurately assessing the size of the skills gap, with the government committing to bring stakeholders together to assess skills gaps and “map pathways by which they can be filled”. It adds a focus on expanding education pathways into AI, bringing businesses, training partners and unions together with national and local government to meet industry’s digital and AI workforce skills needs by autumn 2026.
Looking to enable “safe and trusted AI development and adoption”, agreed-upon recommendations include continuing to support and grow the AI Safety Institute (AISI), with the DSIT to confirm the institute’s funding through upcoming spending reviews and its intention to establish the AISI as a statutory body by spring 2025.
Another area highlighted is working with regulators to “accelerate AI in priority sectors and implement pro-innovation initiatives like regulatory sandboxes”, with the DSIT to identify priority sectors with “high growth potential”, work with regulators to identify “pro-innovation initiatives”, and update on progress in summer 2025.
An AI assurance ecosystem to promote trust and adoption is also noted, with the DSIT looking to prioritise additional funding for AISI’s systemic AI safety programme at spending reviews, as well as exploring other opportunities to grow the domestic AI safety market, with a public update to follow by spring 2025.
The plan also sets out recommendations toward adopting a “scan > pilot > scale” approach. Here it recommends appointing an AI lead for each of the UK government’s missions to help identify where AI could be a solution, and developing a “cross-government, technical horizon scanning and market intelligence capability”.
The need to encourage two-way partnerships with AI vendors and startups to “anticipate future AI developments and signal public sector demand” is highlighted. To support this, the recommendations include building a “data-rich experimentation environment including streamlined approach to accessing data sets, access to language models and necessary infrastructure like computing capacity”.
To support vendors, the notion of a “scaling service” for successful pilots is highlighted, with senior support and central funding, and the DSIT to “scope” this and offer an update by autumn 2025.
Other recommendations focus on enabling the public and private sectors to reinforce one another, addressing private sector user adoption barriers, and advancing AI. These include publishing best practice, results, and case studies through an “AI Knowledge Hub”, to be piloted in summer 2025. Driving AI adoption across the whole country is also recommended, with the DSIT to work with devolved and local government to identify opportunities to incorporate AI into local growth plans on a continuous basis from summer 2025; along with creating a new unit to deliver on the mandate of “maximising the UK’s stake in frontier AI”, with the government committing to share further details by spring 2025 of a new function “which will draw on wider government functions to partner with AI companies”.
AI regulation and innovation from across the UK health sector
An HTN Now panel discussion from last year looked at whether the reality of AI will live up to the current hype, and how to manage bias in healthcare data. Expert panellists included Puja Myles, director at MHRA Clinical Practice Research Datalink; Shanker Vijayadeva, GP lead and digital transformation for the London region at NHS England; and Ricardo Baptista Leite, M.D., CEO at HealthAI, the global agency for responsible AI in health. The session explored topics including what is needed to manage bias; what “responsible AI” really looks like; how to ensure AI is inclusive and equitable; how AI can help support underserved populations; the deployment of AI in the NHS; and the potential to harness AI in supporting the shift from reactive to proactive care.
We asked our LinkedIn followers for their thoughts on the biggest concern for AI in healthcare: equitability, bias, transparency or regulation? 52 percent of over 100 voters highlighted regulation as their main concern, with 21 percent voting for bias.
Somerset NHS Foundation Trust’s AI policy was recently shared by the trust’s chief scientist for data, operational research & artificial intelligence, focusing on the need for safe integration and an approach balancing innovation with ethical and legal responsibilities.
In November, NICE launched a new reporting standard designed to help improve the transparency and quality of cost-effectiveness studies of AI technologies, in a move it hopes will “help healthcare decision-makers understand the value of AI-enabled treatments” and offer patients “faster access to the most promising ones”.
The DSIT published a report examining the UK’s AI assurance market, looking at the current landscape, opportunities for future market growth, and the potential government actions required to support this “emerging industry”. Informed by an industry survey, interviews with industry experts, focus groups with members of the public, and an economic analysis of the AI assurance market, the report ultimately finds that there are opportunities to drive the market forward by tackling challenges relating to demand, supply, interoperability, lack of understanding, lack of “quality infrastructure”, limited access to information, and a “fragmented” AI governance landscape.
And the MHRA selected five new technologies as part of its AI Airlock scheme, “to better understand how we can regulate artificial intelligence powered medical devices”. These include medical devices for cancer, chronic respiratory disease and radiology diagnostic services.