An AI policy from Doncaster and Bassetlaw Teaching Hospitals (DBTH) NHS Foundation Trust has set out a guiding framework to ensure “the appropriate deployment, management, and oversight of AI systems across the DBTH partners”. Its scope covers all departments and services, and applies to AI systems whether developed internally or procured from external suppliers.
In accordance with the policy, the senior information risk owner (SIRO) will hold responsibility for the overall governance and management of information risks associated with AI systems, ensuring that appropriate risk management processes, controls, and policies are in place, and that risks and adverse impacts are addressed. The Caldicott Guardian will ensure data is processed in line with the Caldicott Principles, and the trust's data protection officer will oversee compliance with data protection legislation, data protection impact assessments (DPIAs), and related concerns.
Safety risks associated with the use of AI systems in clinical settings will be assessed by the chief nursing information officer (CNIO) and the clinical safety officer, whilst the policy also outlines responsibilities for the business intelligence, finance, procurement, and digital teams; research and innovation; and the head of organisational development.
On generative AI, the policy states it “can be used in many ways to enhance the work of DBTH”, but purpose and use must be clearly defined and agreed, including the reason for use, the intended benefits, and how impact or value will be measured. Where possible, data should be anonymised, and care should be taken to ensure that an individual cannot be identified from the information provided.
“The combined details of a local area, a rare disease and a very young age may enable a patient to be identified. In such cases you would need to treat this as personal data and therefore identify a legal basis for the processing along with meeting the requirements of the common law duty of confidentiality,” the trust states. This also applies to data used in testing and developing AI systems.
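To make that re-identification risk concrete, here is a minimal sketch in Python (with entirely hypothetical field names and data, not drawn from the DBTH policy) of how combinations of quasi-identifiers such as local area, condition, and age band can be checked for uniquely identifying records before a dataset is treated as anonymous:

```python
import pandas as pd

# Hypothetical example: count how many records share each combination of
# quasi-identifiers. A group of size 1 means a patient is unique on those
# attributes alone, so the record must be treated as personal data.
records = pd.DataFrame({
    "local_area": ["DN1", "DN1", "S81", "DN1"],
    "condition":  ["rare disease X", "asthma", "asthma", "rare disease X"],
    "age_band":   ["0-4", "30-39", "30-39", "5-9"],
})

quasi_identifiers = ["local_area", "condition", "age_band"]
group_sizes = records.groupby(quasi_identifiers).size()

# Flag any combination shared by fewer than k individuals (a k-anonymity check).
k = 2
risky = group_sizes[group_sizes < k]
print(risky)
```

In this toy dataset, the combination of a local area, a rare disease and a very young age band appears only once, which is exactly the scenario the trust describes.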
When using publicly available AI apps and services, staff should observe a number of rules set out in the policy, DBTH notes. These include not inputting personal or business-sensitive data, limiting use to non-clinical purposes, and informing information governance (IG) teams of any intention to use such tools for routine work. The trust also expects staff using AI in this way to be aware of potential ethical considerations “including the potential to propagate biased, discriminatory, or harmful content”, and of the need to verify outputs to ensure accuracy.
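As a rough illustration of the “no personal or business-sensitive data” rule (the patterns and function below are our own sketch, not part of the DBTH policy), a simple pre-submission screen could flag obvious identifiers before text is pasted into a public AI tool:

```python
import re

# Hypothetical pre-submission check: crude patterns for obvious identifiers.
# A coarse safety net only, not a substitute for the policy or IG review.
PATTERNS = {
    "nhs_number":  re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # 10-digit NHS number
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I),
}

def flag_identifiers(text: str) -> list[str]:
    """Return the names of any identifier patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

hits = flag_identifiers("Clinic letter for patient at DN4 5HX, contact jo.bloggs@example.com")
if hits:
    print("Do not submit - possible personal data:", hits)
```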
For the procurement and implementation of AI products or systems, the trust highlights actions to be taken covering engagement with the digital, technical, and business intelligence teams; DPIAs; the Digital Technology Assessment Criteria (DTAC); and approvals. AI outputs must be reviewed by an “appropriately qualified human”, there should be an agreed process in place to flag concerns around outputs, and incident response plans must be established to handle security incidents such as data breaches, it continues.
“You should conduct patient and public engagement activities that include determining if individuals support the use of data for your intended purpose, or if they have any concerns on how their data will be used,” DBTH states. “If the use of AI involves service change, then prior to the implementation of any AI programme, formal consultation must take place with employees and their trade union representatives in accordance with the organisational change policy.”
Wider trend: AI in health and care
For a practical HTN Now webinar taking a deep dive into AI in health and care, we were joined by expert panellists Peter Thomas, chief clinical information officer and director of digital development at Moorfields Eye Hospital; Sally Mole, senior digital programme manager in the digital portfolio delivery team at The Dudley Group; and Ananya Datta, associate director of primary care digital delivery at South East London ICS. The session shared approaches, best practices, challenges, successes and learnings for the practical implementation of AI technologies across health and care, with our panel offering insight into current work, future plans, and ongoing collaborations in areas such as ambient AI.
In an update, Welsh Ambulance Services University NHS Trust has shared plans around Microsoft Copilot and wider AI use across the trust. Digital priorities for 2025/26 include an AI/innovation lab, emergency services network upgrades, smart stations, and collaboration on the national data resource. An AI policy and ethics panel is in development, an innovation lab has been launched, and the trust now has a mobile digital support hub for direct engagement with crews.
Cambridge University Hospitals NHS Foundation Trust has shared progress toward its aim of becoming “the most innovation-friendly trust in the country”, highlighting work around ambient AI, its digital front door, and eHospital. An AI steering group has been created, according to CUH, and a trust-wide AI policy has been published along with evaluation frameworks and usage guidelines. It has also launched the Newton’s Tree AI deployment platform to support AI rollout into live care and research settings, and developed a clinical data science unit to offer data pipelines and specialist analytics for research and innovation.