A systematic review of 27 studies on healthcare workers’ perceptions, experiences, and trust in AI clinical decision support systems (AI-CDSS) has highlighted eight key themes across explainability, training, stakeholder involvement, and human-centred design.
The review identified 27 studies published between 2020 and 2024, a window chosen to reflect recent advancements in and increased interest in AI. A range of qualitative, quantitative, and mixed-methods studies were included based on their focus on examining aspects of trust or acceptance of AI amongst healthcare workers, and the Mixed Methods Appraisal Tool was used to assess quality prior to extraction of key details. Sample sizes varied from small focus groups to cohorts exceeding 1,000 individuals, with participants covering a wide range of healthcare providers such as physicians, nurses, nurse practitioners, GPs, pharmacists, and AI practitioners.
AI-CDSS tools represented in the review included machine learning models for sepsis treatment recommendations in intensive care units, ChatGPT-enhanced EHR alerts for medication optimisation in diabetes, tools for dermatological diagnosis, and systems for predicting lung cancer relapse.
The authors outlined factors influencing healthcare workers’ trust in AI-CDSS tools as described in the included studies, such as experience with the AI system, colleague recommendations, and results from randomised controlled trials. “Transparency, accuracy, and the reliability of AI recommendations were identified as critical, with recurring concerns about the ‘black-box’ nature of algorithms and the lack of clarity regarding how insights are generated,” they state.
Other factors impacting trust related to perceived or actual risks, ease of use, organisational fit, and alignment with clinical judgement. The importance of proper training was emphasised, as was the presence of customisable features, the credibility of system developers, workload impact, acceptable error thresholds, and medical liability. Whilst stakeholder involvement and familiarity were positive influences, concerns around job displacement and the dehumanisation of care were identified as challenges to building trust in AI-CDSS.
Eight thematic insights were subsequently highlighted: system transparency, training and familiarity, usability and seamless integration with clinical workflows, clinical reliability, credibility and validation, ethical considerations, human-centric design, and customisation and control. These themes were explored through enablers of and barriers to healthcare workers’ trust, with prior system use, training, endorsements from colleagues, and observing system performance over time contributing “significantly” to trust building.
Barriers listed include a lack of transparency in AI algorithms, unclear recommendations, insufficient training, workflow disruption, perceived threats to professional autonomy, limited external validation, and concerns about accuracy. Poor generalisability to different clinical settings was also highlighted, as were medical liability concerns, fear of clinical errors, threats to patient-clinician relationships, and perceived risks around bias, job displacement, and ethics.
The authors proceed to outline a number of recommendations for improving trust in AI-CDSS, such as the use of interpretable algorithms, comprehensive and hands-on training to build user confidence and understanding, peer-led workshops to improve usability, the validation of systems through randomised trials, and testing across a variety of clinical settings. Legal responsibilities should be clarified, they note, and liability risk reduced through strong validation; ethical issues and bias should be addressed transparently; stakeholders should be involved in the design process; and AI should be designed to support, rather than replace, human judgement.
Citation: Tun HM, Rahman HA, Naing L, Malik OA. Trust in Artificial Intelligence–Based Clinical Decision Support Systems Among Health Care Workers: Systematic Review. J Med Internet Res 2025;27:e69678. URL: https://www.jmir.org/2025/1/e69678. DOI: 10.2196/69678. PMID: 40772775.
Wider trend: The safe and effective implementation of AI in health and care
For an HTN Now panel discussion on the reality of AI and managing bias in healthcare data, we were joined by panellists including Puja Myles, director at MHRA Clinical Practice Research Datalink; Shanker Vijayadeva, GP lead and digital transformation for the London region at NHS England; and Ricardo Baptista Leite, M.D., CEO at HealthAI, the global agency for responsible AI in health. The session explored topics including what is needed to manage bias; what responsible AI really looks like; how to ensure AI is inclusive and equitable; how AI can help support underserved populations; the deployment of AI in the NHS; and the potential to harness AI in supporting the shift from reactive to proactive care.
NHS Greater Glasgow and Clyde, NHS Lothian and AI evaluation company Aival have begun testing the technical performance of AI tools as part of a £1 million project looking at how well AI integrates with existing clinical systems and workflows.
Health Level Seven (HL7) has launched an AI Office, with the aim of setting foundational standards for the use of safe and trustworthy AI to drive international transformation in healthcare. The AI Office is said to focus on four strategic workstreams, each designed to make sure emerging technologies are “trusted, explainable and interoperable”, as well as scalable across clinical, operational and research settings globally.
Join HTN and experts from across the health and care sector for a panel discussion on approaches to AI, policy, safety, regulation, and evaluation, scheduled for 27 August, 10-11am. The session will explore key focuses and challenges for the implementation of AI. To learn more, or to register, please click here.