News

HL7 shares guidance on standards-based AI and ML data lifecycle

Health Level Seven International (HL7) has published guidance on the artificial intelligence and machine learning data lifecycle, intended as an “informative document” to help developers promote the use of standards to “improve the trust and quality of interoperable data used in AI models”.

The paper supplements the guidance with a range of case studies exploring the impact AI can have on healthcare decision-making, based on patient scenarios covering different conditions.

HL7 highlights anticipated benefits of the guidance, including helping to ensure consistency in the collection, annotation and processing of data so that machine learning models are trained on high-quality, reliable data; enabling interoperability between systems and technologies; promoting transparency in the development and implementation of AI systems; and helping to address ethical considerations around AI.

Find out more here.

Artificial intelligence: the wider trend

Last month, we explored recent updates from the NHS offering insight into how AI tools are being utilised and tested across the health service, looking in particular at examples from South Tyneside and Sunderland, Cambridgeshire and Peterborough, George Eliot, South Warwickshire, and Doncaster and Bassetlaw.

Also in August, HTN hosted a panel discussion asking whether the reality of AI will live up to the current hype, as well as how bias in healthcare data can be managed. We were joined by Puja Myles, director at MHRA Clinical Practice Research Datalink; Shanker Vijayadeva, GP lead for digital transformation in the London region at NHS England; and Ricardo Baptista Leite, M.D., CEO at not-for-profit HealthAI. Click here to read their insights.

Ricardo joined us for an interview earlier in the year, in which he discussed the potential of and considerations around AI, key learnings from his career, and his work at HealthAI developing a global regulatory network for responsible AI in health, which seeks to ensure that each country has the tools to validate AI technologies in accordance with international standards.

Following the panel discussion, we asked our LinkedIn audience for their thoughts on two key questions: what is the biggest concern for AI in healthcare, and what is the biggest barrier to responsible AI in healthcare?

Spotlight on data

On data, we covered a recent update from the British Medical Association and GPC England on NHS England's plans to extract data on cloud-based telephony usage from GP clinical systems.

We also looked into data shared by South East Coast Ambulance Service on the implementation of automated texting with the aim of supporting patient safety and saving time for staff.

And at the end of August we reported on the virtual wards operational framework published by NHS England, designed to support operational planning guidance around maintaining virtual ward capacity above 80 percent, and calling for “robust and consistent” data to be gathered on key topics.