News

Government publishes frameworks and risks toolkit for the implementation of generative AI

The government has published a series of human-centred frameworks alongside a practical toolkit for the safe implementation of generative AI, featuring nine tips for leaders that are “critical to success”. The recommended approach is based on three stages, adopt, sustain, and optimise, and is informed by the deployment of a generative AI tool, Assist, across over 200 government organisations.

Leaders are encouraged to “lead by example” by using tools in their own work, to foster a culture of innovation and continuous learning within their organisation, and to recognise and reward use by acknowledging teams or individuals who are using tools effectively. Investing in AI literacy and usage training is vital, according to the guidance, as is allocating adequate resources to delivery teams, implementing robust monitoring and risk management methods, and maintaining organisational safeguards.

“There has been very little practical guidance for organisations struggling to bridge the gap between making AI tools available to their employees and regular and high impact use of the tools provided,” the guidance states. “We need to bridge the gap between technological innovation and human adoption.”

Phase one, adopt, focuses on encouraging uptake of AI solutions, recommending that organisations clearly outline steps to be taken by users to support adoption, identify ways to simplify steps in the adoption journey to mitigate drop-off, establish metrics to identify adoption success, and track user actions to identify drop-off. It points to ways of understanding inequalities in adoption, to the need to develop a plan for measuring whether solutions are effective, and to the use of co-design in informing changes and improving features to overcome barriers to adoption.

Phase two, sustain, looks at ensuring AI applications are used routinely and embedded within everyday tasks, suggesting the use of metrics and dashboards to measure usage, regular research and feedback sessions to identify barriers to routine use, and the development of a user journey map to identify areas requiring further support. Developing a mixed methods approach to measuring efficiency benefits is also recommended, based on the kind of work the tool is supporting and the needs of the organisation relating to AI.

Phase three, optimise, considers how to ensure high-quality and safe use. “While promoting the adoption and regular use of AI tools is essential, it is equally crucial that users use these tools effectively and safely,” the guidance states. It goes on to outline practical steps organisations can take toward this goal, including the analysis of use cases and user research, working with teams to identify practical ways to embed the approach within team processes, developing continuous training and tailored support, providing support for leaders, and supporting relevant expertise and skills.

The accompanying hidden risks toolkit for AI is designed to support the assessment of barriers to safe adoption, pre-empt risks from scaling, aid in the design of effective training, and help organisations with ongoing monitoring. It outlines three approaches: de-risking the tool using technical measures; ensuring human oversight; and assigning risk ownership. It also identifies six categories of hidden risk in organisational AI rollout: inaccurate quality assurance, task-tool mismatch, perceptions, workflow or organisational challenges, ethics, and technological over-reliance.

Wider trend: Harnessing AI for health and care

The US Food and Drug Administration has introduced a generative AI tool following a successful pilot, with the intention of modernising the agency’s functions and leveraging AI to “better serve” US citizens. Marty Makary, FDA commissioner, said the agency has set an “aggressive timeline” for agency-wide rollout by 30 June.

Health Innovation Hub Ireland (HIHI) has announced the launch of a new national initiative called HIHI.AI Call 2025, said to support the “development and testing of AI solutions that can make a real impact in Ireland’s healthcare system”. As part of the initiative, HIHI is looking for input from companies, startups, researchers, clinicians and industry leaders who are already focused on developing AI-powered healthcare solutions. Interested suppliers have been asked to take part in an AI in healthcare competition, leading to an opportunity to share and pilot their AI innovations in “real-world clinical settings”, while also having access to HIHI’s national support network.

Guy’s and St Thomas’ NHS Foundation Trust has entered a partnership with Aneira Health to offer AI-enabled healthcare for women working for the NHS, focusing on fitting around busy lives, improving access, and delivering personalised, proactive care. The service is expected to launch this spring, with a small cohort of the trust’s workforce involved in testing the approach.

OpenAI, the AI research and deployment company responsible for ChatGPT, has launched a new benchmark for evaluating the capabilities of health AI systems, built in partnership with 262 physicians practising across 60 countries. The company reports that large language models “have improved significantly over time and already outperform experts in writing responses to examples tested in our benchmark”.