Manchester University NHS Foundation Trust (MFT) is to launch the next phase of its AI collaboration with Microsoft, aiming to expand colleague access to Copilot and to establish an MFT “Agent Factory” that will support teams in designing and implementing AI tools to automate routine operational tasks.
The trust has already rolled out Dragon Copilot Ambient Voice Technology and 1,500 Microsoft 365 Copilot licences across a range of roles. Over each of the next three years, MFT will receive an additional 6,500 Copilot licences, reportedly enabling access for all corporate staff and 1,600 frontline colleagues. Alongside this, the trust plans to invest in training and development to build colleague confidence in the use of AI.
This next phase will also grant teams the ability to build and deploy AI agents to support processes in areas including admin, finance, and information governance, with MFT putting in place “appropriate” human-in-the-loop protections to ensure safe and responsible use.
AI agents are being used to support finance teams with forecasting and HR teams with responding to common queries and elements of recruitment, according to MFT, with the trust noting: “At the scale of MFT, even modest reductions in administrative workload have the potential to release significant time and improve operational efficiency across both clinical and corporate services.”
Trust chief executive Mark Cubbon talked about the potential for the collaboration to help streamline admin processes, reduce the potential for human error in high-volume tasks, and allow the reinvestment of time and resources in direct patient care. “Agentic AI is an important part of this next phase, and our early HR pilots suggest these tools could reduce the time spent on some administrative tasks by up to half,” he continued. “What matters most is introducing the tools responsibly, with the right safeguards in place, and with clinicians and staff closely involved in how they are used.”
Wider trend: Health AI
HTN was joined by a panel including Ciara Moore, EPR operations director at Bath, Salisbury and Great Western Group, Stuart Cooney, CTO at Royal Berkshire NHS Foundation Trust, and Julian Wiggins, healthcare solution director at Rackspace Technology, for a discussion focusing on cloud adoption, AI maturity, and cyber resilience. Panellists explored how healthcare organisations are tackling delivery, legacy systems, and rising digital expectations, and what this means for future strategy and plans. We also looked at the fragmented cloud landscape, integration pressures, legacy infrastructure, AI, and the growing urgency around cyber resilience, finishing by asking where NHS leaders should prioritise investment and focus in 2026.
A study exploring informed consent for ambient documentation using generative AI in outpatient care has highlighted nuances, including that patients are more likely to self-censor when discussing mental and sexual health or illicit activity during consultations. The study, published in JAMA Network Open, was conducted from March to December 2024 in ambulatory practices across specialities in a “large urban academic health centre”, involving 18 clinicians and 103 patients in an operational proof-of-concept.
The World Health Organization (WHO) has published three recommendations on the use of AI in mental health and wellbeing, developed during an online workshop event bringing together more than 30 international experts in AI, mental health, ethics, and public policy. The event, held as a pre-summit event for the India AI Impact Summit 2026, was attended by researchers, clinicians, policy makers, and advocates, WHO explains. One of the topics discussed related to the potential risks and challenges around growing use of generative AI tools “neither designed nor tested for mental health”, particularly by young people.