The US Department of Health and Human Services (HHS) has published an AI strategy and accompanying compliance plan, setting out how it will utilise leading technologies to enhance efficiency, foster American innovation, facilitate data and best practice sharing, standardise risk practices, and embrace a “try-first” culture.
Sharing a preliminary assessment of AI maturity across the organisation, HHS notes that its divisions have embraced many facets of AI, with use cases that empower its workforce. There were 271 active or planned AI implementations in the 2024 financial year, with estimates that new use cases will increase by around 70 percent for the 2025 financial year.
In its compliance plan, HHS identifies opportunities across several domains to minimise barriers to AI implementation, including in funding and procurement, IT infrastructure, product development, operations and maintenance, data management and sharing, cybersecurity, and workforce development. To maximise these opportunities, it highlights actions to be taken in defining and sharing best practices, accelerating the “Authority to Operate” process to speed up implementation, and putting in place internal policies to encourage piloting and the scaling of use cases.
“Through AI innovation, improving AI governance, and fostering public trust in federal use of AI, HHS will update overly burdensome processes in favour of greater efficiency and effectiveness, and the Department will better fulfil its mission of improving the health and well-being of the American people,” it states.
Areas of focus for the strategy include strengthening governance and risk management, and developing a OneHHS AI-integrated Commons to provide “rapid and reusable AI innovation at lower cost” through shared data resources, computing power, models, and testbed environments. It similarly looks to equip the HHS workforce with the necessary skills and AI tools to reduce administrative burdens, and to accelerate research and translation through “embedding the principles of Gold-Standard Science into AI development and deployment”. An outcomes-first approach will be taken to integrating AI to improve both individual and population health.
Around governance and risk management, HHS sets out to standardise “minimum risk practices” for AI, including pre-deployment testing, impact assessments, independent review, monitoring, and safe termination if non-compliant. An AI use case inventory will be maintained, and an AI governance board has been established.
When it comes to designing infrastructure and platforms around user needs, HHS aims to deliver a “reusable value layer” of infrastructure and platforms that its departments can leverage. This includes data infrastructure emphasising centralisation and standards to increase usability for AI solutions, integration with electronic health records, and the streamlining of IT procurement, roles, and processes.
For the workforce, HHS highlights its vision of employees understanding how to leverage AI in their roles, being supported by approved AI assistants in their day-to-day tasks, and embracing a “try-first” culture. Its goals are to establish role-based training pathways, deploy approved AI tools, stand up service desks for common tasks and prompts, and recruit “top AI and data talent”. Power users will facilitate peer-to-peer learning, AI training will be formalised, and appropriate sharing of information will be encouraged through AI communities of practice. Research will be funded to benchmark AI use and configure standards for monitoring and evaluating AI tools in the real world.
Elsewhere, HHS’s approach places AI as a supportive tool to enhance human decision-making, “without compromising the essential human touch”. Its goals cover the identification of priority conditions and public health issues that could be addressed with AI-enabled tools, promoting the adoption of AI tools for clinical decision support and proactive outreach, and supporting the continuous monitoring of AI to inform strategic decision-making.
Wider trend: AI
We were joined for a practical HTN Now webinar taking a deep dive into AI in health and care by expert panellists Peter Thomas, chief clinical information officer and director of digital development at Moorfields Eye Hospital; Sally Mole, senior digital programme manager – digital portfolio delivery team at The Dudley Group; and Ananya Datta, associate director of primary care digital delivery, South East London ICS. The session shared approaches, best practices, challenges, successes and learnings for the practical implementation of AI technologies across health and care, with our panel offering insight into current work, future plans, and ongoing collaborations in areas such as Ambient AI.
HTN was joined by a panel of experts from across the health sector for a focused webinar on the use of ambient scribe technology in NHS trusts. Panellists included Lauren Riddle, transformation programme manager at Hampshire and Isle of Wight Healthcare (HIoW); Ynez Symonds, CNIO at HIoW; Dom Pimenta, co-founder and CEO at Tortus AI; and Stuart Kyle, consultant rheumatologist and clinical lead for outpatient transformation at Royal Devon University Hospital. Our panel discussed the practicalities and considerations for ambient scribe implementations, from operating procedures and policies, integration and functionality, through to best practices around patient-practitioner interactions.
An AI Policy from Doncaster and Bassetlaw Teaching Hospitals (DBTH) NHS Foundation Trust has set out a guiding framework to ensure “the appropriate deployment, management, and oversight of AI systems across the DBTH partners”. Its scope covers all departments and services, and all AI systems, whether internally developed or procured from external suppliers. In accordance with the policy, the Senior Information Risk Owner (SIRO) will hold responsibility for the overall governance and management of information risks associated with AI systems, ensuring appropriate risk management processes, controls, and policies are in place, as well as addressing risks and adverse impacts.