News

Audience views: biggest concerns for AI in healthcare, and the barriers to responsible AI

Following on from our recent panel discussion exploring whether the reality of AI can live up to current hype and how bias in healthcare data can be managed, we posed two follow-up questions to our audience to discern views around AI concerns and perspectives on barriers to responsible AI.

Firstly, we asked LinkedIn followers for their thoughts on the biggest concern for AI in healthcare: equitability, bias, transparency or regulation?

52 percent of over 100 voters highlighted regulation as their main concern, with voters in roles such as patient safety and quality coordinator; digital project lead; innovation manager; and chief clinical information officer.

In second place with 21 percent of the vote was bias – voters here included a cyber security analyst, a digital director and a CEO.

Transparency came a close third, with voters including a nurse teacher, a project manager and an associate director sharing concerns about transparency in healthcare AI.

‘Equitability’ took eight percent, with votes coming in from roles such as GP trainee and digital health consultant.

Taking the votes on board, we posted a follow-up question: what do you think is the biggest barrier to responsible AI in healthcare? Voting options included inadequate regulation; bias; lack of standardised data; and issues with transparency.

Votes were closer than in the first poll, with the winning option – inadequate regulation – taking 38 percent whilst the next most popular choice – lack of standardised data – took 33 percent. For the former, votes came from roles including practice manager, chief digital information officer and principal clinical safety officer; for the latter, programme manager, data manager and business intelligence analyst.

In third place, 16 percent of voters said that issues with transparency are the biggest barrier to responsible AI in healthcare. Votes came in from positions such as innovation consultant and deputy chief nurse.

And in fourth place, 13 percent of the vote went to bias as the biggest barrier; voters here included a nurse and a GP.

What would you have voted? Don’t forget to follow HTN on LinkedIn for the opportunity to take part in future polls and questions.

AI in healthcare: the wider trend

We’ve covered lots of news on AI in this space in recent weeks. Let’s take a look at some of those stories.

Yesterday, HTN explored AI use across four different NHS trusts, looking at how AI is being tested on its ability to identify cancerous abnormalities; predict cognitive disorders; support waiting list reductions; and help develop personalised care plans. Click here to read more.

Earlier this month, we noted that the AI Act, a legal framework seeking to address the risks of AI in Europe by setting out clear requirements and obligations in support of “trustworthy AI”, has officially entered into force.

In funding news, HTN reported that global healthcare AI company Huma Therapeutics Limited completed a Series D funding round with financing of over $80 million, and launched the Huma Cloud Platform, which is designed to help accelerate the adoption of digital and AI across care and research.

In other news from NHS trusts, we highlighted how Leeds Teaching Hospitals plans to deploy the enterprise AI platform from Newton’s Tree with the aim of supporting the trust to rapidly scale its ability to evaluate and implement AI applications. We also looked into the use of AI at Great Ormond Street Hospital, where a machine learning tool is being developed in the hope of helping to predict Parkinson’s disease before the onset of symptoms. At Tameside and Glossop, a skin cancer pilot has been launched featuring an AI platform designed to help triage and assess skin lesions for suspected cancer. And at Cambridge University Hospitals, a funding grant of up to £365,000 has been awarded to explore how artificial intelligence and video can help identify gastric cancer at an earlier stage.

And from the University of Huddersfield, HTN covered the news that a secure threat intelligence sharing platform is being developed with the aim of helping to protect AI-enabled diagnostic tools from cyber attacks.