News

US Department of Health & Human Services AI strategic plan focuses on innovation, ethical use, access, organisational culture

The US Department of Health & Human Services has published its AI Strategic Plan, with a focus on four key domains: catalysing innovation in health AI and unlocking “new ways to improve people’s lives”; promoting trustworthy AI development and ethical and responsible use; “democratising” AI technologies and resources to promote access; and “cultivating AI-empowered workforces and organisation cultures” to ensure AI’s safe and effective use.

Broad actions to be taken in the domain of catalysing innovation and unlocking new ways to improve people’s lives include modernising infrastructure to support AI implementation and adoption, enhancing collaboration and public-private partnerships, clarifying regulatory oversight and coverage, and gathering evidence on the outcomes of AI interventions and on best practices.

For promoting trustworthy AI development and ensuring ethical and responsible AI use, the plan talks about the need to build and disseminate evidence “that supports mitigating risks to equity, biosecurity, data security, and privacy”, and to develop “clear standards” for the use of federal resources in support of trustworthy AI. It also calls for supporting organisational governance for risk management, refining regulatory frameworks, and promoting external evaluation, monitoring and transparency reporting for “quality assurance of health AI”.

The department also looks at democratising AI technologies to improve access, supporting information sharing to help foster collaboration, developing “user friendly, customisable, and open-source AI tools”, and enhancing the capabilities of community organisations, “including providing resources or other mechanisms where appropriate”.

Finally, the plan moves on to consider empowering the health workforce around the responsible use of AI, looking to improve training around the governance and management of AI, develop a “robust AI talent pipeline”, give professionals access to the resources required to support their roles within health organisations, and use AI to help “mitigate labour workforce shortages and address burnout and attrition”.

Also highlighted are actions to be taken within each of the HHS primary domains: medical research and discovery; medical product development, safety, and effectiveness; healthcare delivery; human services delivery; and public health, as well as for cybersecurity and critical infrastructure protection, and internal operations. These cover developing “AI-ready data standards and datasets to bolster their usability for AI-empowered medical research and discovery”; clarifying regulatory oversight of medical products; ensuring healthcare professionals have access to training, resources and research to support AI literacy; and more.

The plan also considers the potential risks of AI associated with each of these primary domains, covering areas such as biosecurity risks, data security risks, “AI hijacking” or “the seizing control of agents or solutions to direct them toward harmful actions”, bias, lack of explainability, the “deskilling of researchers and investigators” as a result of automation, the potential to introduce safety risks, the potential to magnify patient trust concerns, and the potential for inappropriate application. By way of illustration, a series of possible use cases and their associated risks is outlined for each of these primary domains.

To read the AI Strategic Plan in full, please click here.

AI strategy from across the NHS

The National Institute for Health and Care Excellence (NICE) recently launched a new reporting standard designed to help improve the transparency and quality of cost-effectiveness studies of AI technologies, in a move it hopes will “help healthcare decision-makers understand the value of AI-enabled treatments” and offer patients “faster access to the most promising ones”.

The Department for Science, Innovation & Technology published a report examining the UK’s AI assurance market, looking at the current landscape, opportunities for future market growth, and potential government actions required to support this “emerging industry”. Informed by an industry survey, interviews with industry experts, focus groups with members of the public, and an economic analysis of the AI assurance market, the report ultimately finds that there are opportunities to drive the market forward by tackling challenges relating to demand, supply, interoperability, lack of understanding, lack of “quality infrastructure”, limited access to information, and a “fragmented” AI governance landscape.

And the UK Government accepted recommendations from the AI Opportunities Action Plan for expanding computing capacity, establishing AI growth zones, and unlocking and sharing data assets, alongside a proposed delivery timeline.

Join HTN for a panel discussion looking at the practicalities of AI technologies, scheduled for 12 February, 10:00 – 11:00. In the session, we’ll focus on the key considerations for implementation and adoption, and the role of data. We’ll share key learnings and best practices, along with real-world examples of AI making a difference to patient outcomes and patient care. Looking to AI in health and care more generally, we’ll explore where AI can make the biggest impact, and how successful current policies and regulations are in encouraging AI innovation.