Insight

AI in research: proposals for informed consent, ChatGPT in diagnosis and treatment, and recommendations for tackling disparities

We’ve taken a look at three articles recently published in Frontiers journals on the topic of artificial intelligence in health, from an exploration of ChatGPT’s role in diagnosis and treatment, to the use of AI to address health disparities in research, to proposed guiding principles for the design of optimal informed consent for AI use in healthcare.

Seven proposals for informed consent on AI use in healthcare

In an article published earlier this month, entitled ‘The ménage à trois of healthcare: the actors in after-AI era under patient consent’, researchers note that with AI becoming increasingly prevalent, its use in healthcare “will inevitably change clinical practice, the patient-caregiver relationship and the concept of the diagnosis and treatment pathway, affecting the balance between the patient’s right to self-determination and health, and thus leading to an evolution of the concept of informed consent.”

Through a review of keywords on the main search engines and an analysis of the guidelines and regulations issued by scientific authorities and legal bodies on the use of AI in public health, the research team sought to characterise these guidelines and AI’s areas of application in order to propose their own guiding principles for optimal informed consent for AI use.

Following their review, the researchers state that advancements in technology, along with cultural changes, have led to the relationship between doctor and patient changing “radically”, from “a paternalistic type of relationship, in which the doctor stood as the sole holder of decision-making power in the care path of his patient, to a true therapeutic alliance between doctor and patient, in which the latter’s decision-making autonomy takes on a fundamental role, thus becoming an active part in the care decision-making process.” As such, they continue, for AI to integrate into the system of care, it must be accepted by patients as well as medical professionals; “therefore, its integration into the informed consent proposal is also essential.”

The researchers make seven key proposals for informed consent, taking into account the “triangular therapeutic alliance between physician, patient, and artificial intelligence”. These proposals are that the patient must understand what AI is and how it works; that the possibility of withdrawing consent at any time must be guaranteed; that it must be defined “in which nodes AI intervention is proposed”, with patients choosing whether to accept or reject it; that the role of AI within each node must be identified, with a breakdown of the types of activities performed and the level of autonomy in their management; and that the consequences of accepting or rejecting AI at each step must be made explicit. Additionally, the proposals state that patients should be accompanied during each medical act, with an explanation of which activities are performed by AI, and that “adequately trained” professionals should assist in drafting and administering consent and in explaining technical procedures, as well as providing assistance “in the case of ethical dilemmas”.

Citation: Saccà R, Turrini R, Ausania F, Turrina S and De Leo D (2024) The ménage à trois of healthcare: the actors in after-AI era under patient consent. Front. Med. 10:1329087. doi: 10.3389/fmed.2023.1329087

ChatGPT in diagnosis and treatment: applications and limitations 

Another article published this month, ‘Rare and complex diseases in focus: ChatGPT’s role in improving diagnosis and treatment’, looks into the potential applications, limitations and ethical considerations of the AI tool with the aim of demonstrating how ChatGPT could potentially “contribute to better patient outcomes and enhance the healthcare system’s overall effectiveness.”

The authors note that the “emergence of artificial intelligence and natural language processing (NLP) technologies has opened up new possibilities for addressing the diagnostic and therapeutic challenges” associated with rare and complex diseases, which, due to their rarity or complexity, “are frequently misdiagnosed or remain undiagnosed for extended periods, leading to delayed or inadequate treatment.” They point out that AI and NLP technologies support rapid analysis of extensive volumes of medical literature and patient records; this, they argue, makes ChatGPT “a valuable assistant to physicians and researchers” as it can potentially facilitate access to information, exploration of treatments, and improved communication with patients.

On the applications and benefits of ChatGPT in diagnosis and treatment, the authors state that a “noteworthy capability” of ChatGPT lies in its ability to process textual descriptions, analyse patient symptoms “even when they are vague or unconventional”, and generate diagnostic suggestions. Additionally, they add, its proficiency in quickly summarising “vast” amounts of literature can help physicians stay up to date with the latest research, and it can contribute to personalised treatment plans by considering the data and medical history of individual patients, assisting in “tailoring treatment strategies that account for the unique characteristics of each rare disease case”. The authors also argue that ChatGPT, along with other AI technologies, “holds promise in disease prediction and early screening” due to its ability to facilitate the identification of pre-symptomatic signs of rare disease through analysis of genetic data, biomarkers and clinical information.

However, the authors emphasise the importance of tackling potential bias in training data and add that ChatGPT’s “reliance on patterns learned from training data may pose challenges in comprehending the nuanced contextual aspects of individual patient situations.” This underscores the need for human involvement, they add, in order to ensure “a more accurate and empathetic understanding”.

Citation: Zheng Y, Sun X, Feng B, Kang K, Yang Y, Zhao A and Wu Y (2024) Rare and complex diseases in focus: ChatGPT’s role in improving diagnosis and treatment. Front. Artif. Intell. 7:1338433. doi: 10.3389/frai.2024.1338433

AI in addressing health disparities 

Also published this month, an opinion article in Frontiers entitled ‘Accelerating health disparities research with artificial intelligence’ argues that “using AI to address health disparities is not just a simple choice but a scientific imperative”, and shares recommendations for leveraging AI in this area.

On how AI can serve as a tool to address health disparities, the authors comment that utilising AI tools can make healthcare professionals “more efficient and comprehensive”, supporting the measurement and understanding of disparities as well as the development of interventions. Through analysis of large datasets, they say, AI can “help us uncover hidden mechanisms and underpinnings of health disparities”, and is also well placed to help “uncover unexpected correlations and relationships that have remained unidentified in human-driven analyses, offering new insights.”

The authors note a number of challenges and limitations of AI and go on to share six recommendations for leveraging AI to tackle these challenges and address health disparities. Firstly, they write, researchers must advocate for the development and utilisation of more diverse datasets; secondly, “stringent” ethical guidelines and frameworks should be developed for designing AI algorithms. The third recommendation encourages greater diversity within AI research and development teams, with inclusive representation among stakeholders to better equip them to identify bias and disparities. Fourthly, the authors highlight the need to “formulate and implement clear ethical standards specific to AI applications in healthcare”, which should focus on reducing bias, ensuring transparency and overseeing responsible development and deployment. The fifth recommendation centres on preserving human-centric healthcare, with the authors noting a need to place emphasis on AI’s “complementary” role as a decision-support tool. Finally, they call for continuous oversight and improvement, stating that protocols should be regularly audited and AI systems monitored to ensure that they “remain unbiased, transparent, and aligned to reduce health disparities”.

Citation: Green BL, Murphy A and Robinson E (2024) Accelerating health disparities research with artificial intelligence. Front. Digit. Health 6:1330160. doi: 10.3389/fdgth.2024.1330160