Imperial College Healthcare NHS Trust has published its framework and approach to future-proof the trust’s adoption and utilisation of AI.
The framework outlines the aim of improving care for patients “from diagnosis to treatment to the administrative functions that keep our hospitals moving”, adding that it will be “vital” to ensure that AI technologies and their usage remain “accessible to all”.
Professor Bob Klaber, Imperial College Healthcare’s director of strategy, research and innovation, noted the “huge potential of AI to transform healthcare”, outlining a strategic approach to using AI to “enhance the quality of care” through testing, evaluating, and piloting AI solutions.
As part of the new approach, a team of doctors, nurses, researchers, patients, and data experts has formed a trust steering group on AI, led by Professor Tim Orchard. The group is set to start work on a roadmap to help maximise the potential of AI across the trust’s hospitals, and has already defined four focus areas where AI can help to solve key problems.
These focus areas are: delivery of clinical care, clinical and patient administration, corporate back office functions, and prediction and prevention. From those four focus areas, five workstreams have been established: AI research and implementation; data and analytics; education and training; leadership, communications and engagement; and AI strategy and implementation.
Klaber also shares guiding principles for the use of AI at the trust, including involving patients from the start in the design process; co-designing with staff to ensure their work can “effectively integrate with technology”; ensuring the equitability and accessibility of AI solutions; adopting AI which can “seamlessly integrate” with existing work processes; and utilising existing tools where possible.
According to Klaber, this approach is helping the trust to explore the potential for AI to help support multiple dimensions of health and care – “from automating routine administration to analysing scans and supporting diagnosis or extracting trends from large datasets to better understand and improve care”. An example, he offers, is Ambient AI, where the trust is “actively testing potential solutions” and working with industry partners to allow the technology to be adopted more widely.
The trust’s approach is also allowing for the development of new AI, Klaber continues. One such project looks at using AI to predict health risks from ECGs: researchers from the trust are working alongside Imperial College London to utilise “very large sets of data consisting of millions of ECGs” to train an AI model to predict patients’ risk of developing disease or of early death. Trials are planned at the trust for 2025, with patients to be offered additional testing if their ECGs suggest future risk, “hopefully allowing clinicians to catch and intervene in disease more quickly”, he writes.
Microsoft also recently designated the trust’s partner, Paddington Life Sciences, as its AI hub, the update notes, whilst the Fleming Initiative, established by the trust as part of that partnership, “will place AI at the heart of the battle against antimicrobial resistance”. The trust’s iCARE secure data environment, which Klaber states offers data insights to “hundreds of researchers each year who are working towards improving healthcare for all”, is also helping boost the trust’s experience in working “safely and effectively with large sets of data”.
By way of final comment, Klaber says: “We are specifically looking at issues of equity, education and training for the future workforce in the safe use of AI and other digital tools, and at how we engage people with and communicate about this extraordinarily complex and fast-moving topic.” To read Klaber’s article in full, please click here.
Wider trend: AI innovation in healthcare
Somerset NHS Foundation Trust’s AI policy was recently shared by the trust’s chief scientist for data, operational research & artificial intelligence, focusing on the need for safe integration and an approach balancing innovation with ethical and legal responsibilities. Ensuring that the document is future-proofed, the policy outlines that whilst at present “even the best and most complex AI models (Siri, Alexa, Chat-GPT, etc.) do not surpass narrow AI…we must ensure protections are in place for future developments”.
Imperial College London recently teamed up with Edinburgh University to develop AI software reportedly capable of reading the brain scans of patients who have had a stroke. The AI algorithm has been trained to identify when a stroke happened and to determine whether it can be successfully treated or reversed.
An HTN panel discussion in August considered whether the reality of AI will live up to the current hype and how to manage bias in healthcare data, covering topics such as what good looks like for responsible AI, ensuring inclusive and equitable AI, and the deployment of AI in the NHS.
NICE launched a new reporting standard designed to help improve the transparency and quality of cost-effectiveness studies of AI technologies, in a move it hopes will “help healthcare decision-makers understand the value of AI-enabled treatments” and offer patients “faster access to the most promising ones”.
In October, we looked at AI use cases in the NHS, including in supporting diagnosis, personalising treatment, predicting disease, and more. We also covered DeepHealth’s acquisition of London-based cancer diagnostic company Kheiron Medical Technologies Limited, as part of efforts to expand its portfolio of AI-powered diagnostic and screening solutions; and University Hospitals Coventry and Warwickshire’s use of AI to improve patient experience.
To stay up to date with the latest in health tech news, keep an eye on our upcoming events.