A study exploring informed consent for ambient documentation using generative AI in outpatient care has highlighted several nuances, including that patients are more likely to self-censor when discussing mental health, sexual health, or illicit activity during consultations.
The study, published in JAMA Network Open, was conducted from March to December 2024 in ambulatory practices across specialities at a “large urban academic health centre”, involving 18 clinicians and 103 patients in an operational proof-of-concept.
74.9 percent of patients reported being comfortable or very comfortable with the use of ambient documentation, rising to 81.6 percent when they were provided with basic information about the technology. However, when participants were given further information on AI features, data storage, and corporate involvement, comfort decreased to 55.3 percent.
Patients were most comfortable with the use of ambient documentation in routine physical examinations, according to the study, with 63.1 percent reporting no change in behaviour when the technology was used in this context. Conversely, patients indicated they would be more likely to self-censor when discussing mental health (35 percent), sexual health (40.8 percent), or illicit activity (51.5 percent).
Quotes gathered from patients echoed this reluctance to have information perceived as being sensitive recorded, with one participant stating: “If I tell my doctor my private problems and he records it, I would feel violated unless… the conversations on that device can be filtered/deleted.” Another said: “It’s not that I think the recorded information would get leaked, more that the information will permanently be on record somewhere.”
Authors note that clinicians shared similar concerns about how data is used, stored, and secured, and also raised medico-legal risks such as liability, worrying that AI might capture “unaddressed complaints” leading to missed diagnoses or legal exposure. Clinicians also acknowledged the potential impact on their relationships with patients: most felt rapport remained unchanged, but some commented that patients may withhold sensitive information.
Quotes from clinicians included: “You are putting a human being in a situation where they are expected to be perfect… like a computer. Where I remember every little thing. That sounds horrible.” and “If the AI says every single complaint and you didn’t address one, then you may be held liable.”
When approaching consent, clinicians reported time constraints, concerns about bias when presenting the technology, and uncertainty about what details to include, the authors share. Patients wanted more information on data use, storage, and access, with 96.1 percent considering details on how audio was used and where it was sent to be very important, and 98.1 percent considering who would be able to access recordings to be very important. “Many patients wanted a clear opt-out option and had questions regarding future withdrawal of consent,” authors observed.
Citation: Lawrence K, Kuram VS, Levine DL, et al. Informed Consent for Ambient Documentation Using Generative AI in Ambulatory Care. JAMA Netw Open. 2025;8(7):e2522400. doi:10.1001/jamanetworkopen.2025.22400.
Wider trend: Health AI
HTN was joined for a deep dive into AI strategy, implementation, adoption, and opportunities by Neill Crump, group associate director of innovation & partnerships at The Dudley Group and Sandwell and West Birmingham, and Pip Hodgson, group digital transformation specialist at University Hospitals of Leicester (UHL) and Northamptonshire (UHN). Our panel discussed their organisations’ approaches to AI and AI strategy, best practices in AI strategy development, Ambient Voice Technology and successful implementation, and the opportunities likely to be ahead with the next wave of AI.
The World Health Organization (WHO) has published three recommendations on the use of AI in mental health and wellbeing, developed during an online workshop event bringing together more than 30 international experts in AI, mental health, ethics, and public policy. The event, held as a pre-summit event for the India AI Impact Summit 2026, was attended by researchers, clinicians, policy makers, and advocates, WHO explains. One of the topics discussed related to the potential risks and challenges around growing use of generative AI tools “neither designed nor tested for mental health”, particularly by young people.
An inquiry into personalised medicine and AI, along with an associated call for evidence, has been launched by the House of Lords Science and Technology Committee, seeking to explore possibilities, assess current regulatory frameworks, and understand how best to deploy proven innovations across the NHS. Specifically, the call for evidence seeks insight into the most significant near-term opportunities for patients to benefit from personalised medicine and AI, where major gaps in understanding exist, what is needed to unlock opportunities, what role AI can play in accelerating development and reducing the cost of personalised medicine, and where AI tools can be most effective in advancing it.