1
Sumner B, Martin R, Gladman T, Wilkinson TJ, Grainger R. Understanding the gap: a balanced multi-perspective approach to defining essential digital health competencies for medical graduates. BMC Medical Education 2025; 25:682. [PMID: 40346629 PMCID: PMC12065156 DOI: 10.1186/s12909-025-07194-8] [Received: 03/03/2025] [Accepted: 04/18/2025] [Indexed: 05/11/2025]
Abstract
BACKGROUND Rapid technological advancements have left medical graduates potentially underprepared for the digital healthcare environment. Despite the importance of digital health education, consensus on essential primary medical degree content is lacking. Focusing on core competence domains can address critical skills while minimising additions to an already demanding curriculum. This study identifies the minimum essential digital health competency domains from the perspectives of learners, teachers, and content experts, aiming to provide a framework for integrating digital health education into medical curricula. METHODS We conducted focus groups with students (n = 17) and semi-structured interviews with medical educators (n = 12) and digital sector experts (n = 11) using video conferencing. Participants were recruited using purposive sampling. The data were analysed using framework analysis and inductive thematic analysis to identify common themes. RESULTS Four core themes and eleven sub-themes were identified and aggregated into four essential competency domains: "Understand the Local Digital Health Ecosystem and Landscape", "Safe, Secure and Ethical Information Literacy and Management", "Proficiency in Digital Health Tools and Associated Technologies" and "Scholarly Research and Evidence-based Practice". Medical educator and digital sector expert participants provided the greatest source of data for curriculum content consideration. Students demonstrated varying levels of aptitude, confidence, and interest in technology. CONCLUSION Our balanced engagement with learners, educators, and digital health experts enabled the identification of a context-relevant framework for the minimum essential digital health competence domains for graduating medical students. The identification of focused, clinically relevant core competencies makes them amenable to integration into an existing curriculum tailored to local contexts. This approach addresses the limitations of restricted curricular space and accommodates varying student interest, confidence, and aptitude in technology. The delivery approach should consider a student-centred adaptive modality that takes advantage of advances in artificial intelligence (AI) as an effective pedagogical tool.
Affiliation(s)
- Brett Sumner
- Department of Medicine, University of Otago Wellington, PO Box 7343, Newtown, Wellington, 6242, New Zealand
- Rachelle Martin
- Department of Medicine, University of Otago Wellington, PO Box 7343, Newtown, Wellington, 6242, New Zealand
- Tehmina Gladman
- Education Unit, University of Otago Wellington, Wellington, New Zealand
- Rebecca Grainger
- Department of Medicine, University of Otago Wellington, PO Box 7343, Newtown, Wellington, 6242, New Zealand
- Education Unit, University of Otago Wellington, Wellington, New Zealand
2
Peacock JG, Cole R, Duncan J, Jensen B, Snively B, Samuel A. Transforming Military Healthcare Education and Training: AI Integration for Future Readiness. Mil Med 2025:usaf169. [PMID: 40317230 DOI: 10.1093/milmed/usaf169] [Received: 11/07/2024] [Revised: 01/18/2025] [Accepted: 04/15/2025] [Indexed: 05/07/2025]
Abstract
INTRODUCTION Artificial intelligence (AI) technologies have spread throughout the world and changed the way many social functions are conducted, including health care. Future large-scale combat missions will likely require health care professionals to use AI tools, among others, in providing care for the Warfighter. Despite the need for an AI-capable health care force, medical education lacks integration of medical AI knowledge. The purpose of this manuscript was to review ways in which military health care education can be improved through an understanding and use of AI technologies. MATERIALS AND METHODS This article is a review of the literature on the integration of AI technologies in medicine and medical education. We provide examples of quotes and images from a larger USU study on a faculty development program centered on learning about AI technologies in health care education. That study is ongoing and is not the focus of this article, but it was approved by the USU IRB. RESULTS Effective integration of AI technologies in military health care education requires military health care educators who are willing to learn how to safely, effectively, and ethically use AI technologies in their own administrative, educational, research, and clinical roles. Together with health care trainees, these faculty can help build and co-create AI-integrated curricula that will accelerate and enhance the military health care curriculum of tomorrow. Trainees can use generative AI tools, such as large language models, to develop their skills and practice the art of generating high-quality AI tools that will improve their studies and prepare them to improve military health care. Integration of AI technologies in the military health care environment requires close military-industry collaboration with AI and security experts to ensure the security of personal and health care information. Through secure cloud computing, blockchain technologies, and application programming interfaces, among other technologies, military health care facilities and systems can safely integrate AI technologies to enhance patient care, clinical research, and health care education. CONCLUSIONS AI technologies are not a dream of the future; they are here, and they are being integrated and implemented in military health care systems. To best prepare the military health care professionals of the future for the reality of medical AI, we must reform military health care education through a combined effort of faculty, students, and industry partners.
Affiliation(s)
- Justin G Peacock
- Department of Radiology, Uniformed Services University, Bethesda, MD 20814, USA
- Rebekah Cole
- Department of Military and Emergency Medicine, Uniformed Services University, Bethesda, MD 20814, United States
- Department of Health Professions Education, Uniformed Services University, Bethesda, MD 20814, United States
- Joshua Duncan
- Department of Preventive Medicine and Biostatistics, Uniformed Services University, Bethesda, MD 20814, United States
- Brandon Jensen
- School of Medicine, Uniformed Services University, Bethesda, MD 20814, United States
- Brad Snively
- School of Medicine, Uniformed Services University, Bethesda, MD 20814, United States
- Anita Samuel
- Department of Health Professions Education, Uniformed Services University, Bethesda, MD 20814, United States
3
Ballard DH, Antigua-Made A, Barre E, Edney E, Gordon EB, Kelahan L, Lodhi T, Martin JG, Ozkan M, Serdynski K, Spieler B, Zhu D, Adams SJ. Impact of ChatGPT and Large Language Models on Radiology Education: Association of Academic Radiology-Radiology Research Alliance Task Force White Paper. Acad Radiol 2025; 32:3039-3049. [PMID: 39616097 DOI: 10.1016/j.acra.2024.10.023] [Received: 06/22/2024] [Revised: 10/06/2024] [Accepted: 10/17/2024] [Indexed: 04/23/2025]
Abstract
Generative artificial intelligence, including large language models (LLMs), holds immense potential to enhance healthcare, medical education, and health research. Recognizing the transformative opportunities and potential risks afforded by LLMs, the Association of Academic Radiology-Radiology Research Alliance convened a task force to explore the promise and pitfalls of using LLMs such as ChatGPT in radiology. This white paper explores the impact of LLMs on radiology education, highlighting their potential to enrich curriculum development, teaching and learning, and learner assessment. Despite these advantages, the implementation of LLMs presents challenges, including limits on accuracy and transparency, the risk of misinformation, data privacy issues, and potential biases, which must be carefully considered. We provide recommendations for the successful integration of LLMs and LLM-based educational tools into radiology education programs, emphasizing assessment of the technological readiness of LLMs for specific use cases, structured planning, regular evaluation, faculty development, increased training opportunities, academic-industry collaboration, and research on best practices for employing LLMs in education.
Affiliation(s)
- David H Ballard
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Emily Barre
- Duke University School of Medicine, Durham, North Carolina, USA
- Elizabeth Edney
- Department of Radiology, University of Nebraska Medical Center, Omaha, Nebraska, USA
- Emile B Gordon
- Department of Radiology, University of California San Diego, San Diego, California, USA
- Linda Kelahan
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Taha Lodhi
- Brody School of Medicine at East Carolina University, Greenville, North Carolina, USA
- Melis Ozkan
- University of Michigan Medical School, Ann Arbor, Michigan, USA
- Bradley Spieler
- Department of Radiology, Louisiana State University School of Medicine, University Medical Center, New Orleans, Louisiana, USA
- Daphne Zhu
- Duke University School of Medicine, Durham, North Carolina, USA
- Scott J Adams
- Department of Medical Imaging, Royal University Hospital, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
4
Tolentino R, Hersson-Edery F, Yaffe M, Abbasgholizadeh-Rahimi S. AIFM-ed Curriculum Framework for Postgraduate Family Medicine Education on Artificial Intelligence: Mixed Methods Study. JMIR Medical Education 2025; 11:e66828. [PMID: 40279148 PMCID: PMC12064963 DOI: 10.2196/66828] [Received: 09/24/2024] [Revised: 02/04/2025] [Accepted: 02/25/2025] [Indexed: 04/26/2025]
Abstract
BACKGROUND As health care moves to a more digital environment, there is a growing need to train future family doctors in the clinical uses of artificial intelligence (AI). However, family medicine training in AI has often been inconsistent or lacking. OBJECTIVE The aim of the study is to develop a curriculum framework for family medicine postgraduate education on AI, called "Artificial Intelligence Training in Postgraduate Family Medicine Education" (AIFM-ed). METHODS First, we conducted a comprehensive scoping review of existing AI education frameworks, guided by the methodological framework of Arksey and O'Malley and the Joanna Briggs Institute methodological framework for scoping reviews. We adhered to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist for reporting the results. Next, 2 national expert panels were convened. Panelists included family medicine educators and residents knowledgeable in AI from family medicine residency programs across Canada. Participants were purposively sampled, and panels were held via Zoom, recorded, and transcribed. Data were analyzed using content analysis, and we followed the Standards for Reporting Qualitative Research for the panels. RESULTS Integration of the scoping review results with the 2 panel discussions involving 14 participants led to the development of the AIFM-ed curriculum framework for AI training in postgraduate family medicine education, with five key elements: (1) need and purpose of the curriculum, (2) learning objectives, (3) curriculum content, (4) organization of curriculum content, and (5) implementation aspects of the curriculum. CONCLUSIONS Using the results of this study, we developed the AIFM-ed curriculum framework for AI training in postgraduate family medicine education. This framework serves as a structured guide for integrating AI competencies into medical education, ensuring that future family physicians are equipped with the skills needed to use AI effectively in clinical practice. Future research should focus on the validation and implementation of the AIFM-ed framework within family medicine education. Institutions are also encouraged to consider adapting the AIFM-ed framework within their own programs, tailoring it to the specific needs of their trainees and health care environments.
Affiliation(s)
- Raymond Tolentino
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Fanny Hersson-Edery
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Mark Yaffe
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Department of Family Medicine, St. Mary's Hospital Center, Integrated University Centre for Health and Social Services of West Island of Montreal, Montreal, QC, Canada
- Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Mila-Quebec, Montreal, QC, Canada
- Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, QC, Canada
5
Onetiu F, Bratu ML, Folescu R, Bratosin F, Bratu T. Assessing Medical Students' Perceptions of AI-Integrated Telemedicine: A Cross-Sectional Study in Romania. Healthcare (Basel) 2025; 13:990. [PMID: 40361768 PMCID: PMC12071906 DOI: 10.3390/healthcare13090990] [Received: 04/07/2025] [Revised: 04/20/2025] [Accepted: 04/23/2025] [Indexed: 05/15/2025]
Abstract
BACKGROUND AND OBJECTIVES The rapid advancement of artificial intelligence (AI) has driven the expansion of telemedicine solutions worldwide, enabling remote diagnosis, patient monitoring, and treatment support. This study aimed to explore medical students' perceptions of AI in telemedicine, focusing on how these future physicians view AI's potential, benefits, and challenges. METHODS A cross-sectional survey was conducted among 161 Romanian medical students spanning Years 1 through 6. Participants completed a 15-item questionnaire covering demographic factors, prior exposure to AI, attitudes toward telemedicine, perceived benefits, and concerns related to ethical and data privacy issues. A questionnaire on digital health acceptance was developed and integrated into the survey instrument. RESULTS Of 161 respondents, 70 (43.5%) reported prior telemedicine use, and 66 (41.0%) indicated high familiarity (Likert scores ≥ 4) with AI-based tools. Fifth- and sixth-year students showed significantly greater acceptance of AI-driven telemedicine than first- and second-year students (p = 0.014). A moderate positive correlation (r = 0.44, p < 0.001) emerged between AI familiarity and telemedicine confidence, while higher data privacy concerns negatively affected acceptance (β = -0.20, p = 0.038). Gender differences were noted but did not reach consistent statistical significance in multivariate models. CONCLUSIONS Overall, Romanian medical students view AI-enhanced telemedicine favorably, particularly those in advanced academic years. Familiarity with AI technologies is a key driver of acceptance, though privacy and ethical considerations remain barriers. These findings underline the need for targeted curricular interventions to bolster AI literacy and address concerns regarding data security and clinical responsibility. By proactively integrating AI-related competencies, medical faculties can better prepare students for a healthcare landscape increasingly shaped by telemedicine.
Affiliation(s)
- Florina Onetiu
- Doctoral School, “Victor Babes” University of Medicine and Pharmacy, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Melania Lavinia Bratu
- Department of Neurosciences, “Victor Babes” University of Medicine and Pharmacy, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Roxana Folescu
- Discipline of Family Medicine, “Victor Babes” University of Medicine and Pharmacy, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Felix Bratosin
- Multidisciplinary Research Center for Infectious Diseases, “Victor Babes” University of Medicine and Pharmacy, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Tiberiu Bratu
- Discipline of Plastic Surgery, “Victor Babes” University of Medicine and Pharmacy, Eftimie Murgu Square 2, 300041 Timisoara, Romania
6
Tariq R, Dilmaghani S, Advani R, Soroush A, Berzin T, Khanna S. Perception and Understanding of Artificial Intelligence Among Gastroenterology Fellows and Early Career Gastroenterologists: A Nationwide Cross-Sectional Survey Study. Dig Dis Sci 2025. [DOI: 10.1007/s10620-025-09067-y] [Received: 12/27/2024] [Accepted: 04/14/2025] [Indexed: 05/03/2025]
7
Duan S, Liu C, Rong T, Zhao Y, Liu B. Integrating AI in medical education: a comprehensive study of medical students' attitudes, concerns, and behavioral intentions. BMC Medical Education 2025; 25:599. [PMID: 40269824 PMCID: PMC12020173 DOI: 10.1186/s12909-025-07177-9] [Received: 12/17/2024] [Accepted: 04/14/2025] [Indexed: 04/25/2025]
Abstract
BACKGROUND To analyze medical students' perceptions of, trust in, and attitudes toward artificial intelligence (AI) in medical education, and to explore their willingness to integrate AI into learning and teaching practices. METHODS This cross-sectional study was performed with undergraduate and postgraduate medical students from two medical universities in Beijing. Data were collected between October and early November 2024 via a self-designed questionnaire covering seven main domains: awareness of AI; expectations and concerns about AI; importance of AI in education; potential challenges and risks of AI in education and learning; the role and potential of AI in education; perceptions of generative AI; and behavioral intentions and plans for AI use in medical education. RESULTS A total of 586 students participated in the survey; 553 valid responses were collected, giving an effective response rate of 94.4%. The majority of participants reported familiarity with AI concepts, whereas only 43.5% had an understanding of AI applications specific to medical education. Postgraduate students exhibited significantly higher awareness of AI tools in medical contexts than undergraduate students (p < 0.001). Gender differences were also observed: male students showed more enthusiasm for and higher engagement with AI technologies than female students (p < 0.001), while female students expressed greater concerns regarding privacy, data security, and potential ethical issues related to AI in medical education (p < 0.05). Male students and postgraduate students showed stronger behavioral intentions to integrate AI tools into their future learning and teaching practices. CONCLUSIONS Medical students exhibit optimistic yet cautious attitudes toward the application of AI in medical education. They acknowledge the potential of AI to enhance educational efficiency but remain mindful of the associated privacy and ethical risks. Strengthening AI education and training, and balancing technological advances with ethical considerations, will be crucial in facilitating the deep integration of AI into medical education. TRIAL REGISTRATION Not a clinical trial.
Affiliation(s)
- Shuo Duan
- Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Fengtai District, Beijing, 100070, China
- Chunyu Liu
- Peking University People's Hospital, Beijing, 100044, China
- Tianhua Rong
- Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Fengtai District, Beijing, 100070, China
- Yixin Zhao
- Peking University People's Hospital, Beijing, 100044, China
- Baoge Liu
- Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Fengtai District, Beijing, 100070, China
8
Sridhar GR, Yarabati V, Gumpeny L. Predicting outcomes using neural networks in the intensive care unit. World J Clin Cases 2025; 13:100966. [PMID: 40242225 PMCID: PMC11718574 DOI: 10.12998/wjcc.v13.i11.100966] [Received: 08/31/2024] [Revised: 11/21/2024] [Accepted: 12/12/2024] [Indexed: 12/26/2024]
Abstract
Patients in intensive care units (ICUs) require rapid, critical decision making. Modern ICUs are data rich, with information streaming from diverse sources. Machine learning (ML) and neural networks (NNs) can leverage these rich data for prognostication and clinical care. They can handle complex nonlinear relationships in medical data and have advantages over traditional predictive methods. Several model families are used, including feedforward networks, recurrent NNs, and convolutional NNs, to predict key outcomes such as mortality, length of ICU stay, and the likelihood of complications. Current NN models exist in silos; their integration into clinical workflows requires greater transparency about the data being analyzed. Most models accurate enough for use in clinical care operate as 'black boxes', in which the logic behind their decision making is opaque. Advances are being made to see through this opacity and peer into the black box's processing. In the near future, ML is positioned to help in clinical decision making far beyond what is currently possible. Transparency is the first step toward validation, which is followed by clinical trust and adoption. In summary, NNs have the transformative potential to enhance predictive accuracy and improve patient management in ICUs; the concept should soon be turning into reality.
Affiliation(s)
- Gumpeny R Sridhar
- Department of Endocrinology and Diabetes, Endocrine and Diabetes Centre, Visakhapatnam 530002, India
- Venkat Yarabati
- Chief Architect, Data and Insights, Agilisys, London W12 7RZ, United Kingdom
- Lakshmi Gumpeny
- Department of Internal Medicine, Gayatri Vidya Parishad Institute of Healthcare and Medical Technology, Visakhapatnam 530048, India
9
Chassang G, Béranger J, Rial-Sebbag E. The Emergence of AI in Public Health Is Calling for Operational Ethics to Foster Responsible Uses. International Journal of Environmental Research and Public Health 2025; 22:568. [PMID: 40283793 PMCID: PMC12027014 DOI: 10.3390/ijerph22040568] [Received: 02/24/2025] [Revised: 03/29/2025] [Accepted: 04/01/2025] [Indexed: 04/29/2025]
Abstract
This paper discusses the responsible use of artificial intelligence (AI) in public health and in medicine, and questions the development of AI ethics in international guidelines from a public health perspective. How can a global ethics approach help conceive responsible AI development and use for improving public health? By analysing key international guidelines on AI ethics (UNESCO, WHO, the European High-Level Expert Group on AI) and the available literature, this paper advocates conceiving proper ethical and legal frameworks and implementation tools for AI in public health, based on a pragmatic, risk-based approach. It highlights how ethical AI principles meet public health objectives and focuses on their value by addressing the meaning of human-centred innovation, transparency, accountability, diversity, equity, privacy protection, technical robustness, environmental protection, and post-marketing surveillance. It concludes that AI technology can reconcile individual and collective ethical approaches to public health, but requires specific legal frameworks and interdisciplinary efforts. Prospects include the development of supporting data infrastructures, stakeholder involvement to ensure long-term commitment and trust, education of the public and of users, and strengthening international organisations' capacity to coordinate and monitor AI developments. The paper closes with a proposal to reflect on an integrated, transparent public health functionality in digital applications that process data.
Affiliation(s)
- Gauthier Chassang
- CERPOP, Université de Toulouse, Inserm, UPS, 31000 Toulouse, France; (J.B.); (E.R.-S.)
- Genotoul Societal Platform, Ethics and Biosciences, GIS Genotoul Occitanie, 31000 Toulouse, France
- Unesco Chair, Ethics Science and Society (E2S), Working Group on Digital Ethics, 31000 Toulouse, France
- Jérôme Béranger
- CERPOP, Université de Toulouse, Inserm, UPS, 31000 Toulouse, France; (J.B.); (E.R.-S.)
- Unesco Chair, Ethics Science and Society (E2S), Working Group on Digital Ethics, 31000 Toulouse, France
- Emmanuelle Rial-Sebbag
- CERPOP, Université de Toulouse, Inserm, UPS, 31000 Toulouse, France; (J.B.); (E.R.-S.)
- Genotoul Societal Platform, Ethics and Biosciences, GIS Genotoul Occitanie, 31000 Toulouse, France
- Unesco Chair, Ethics Science and Society (E2S), Working Group on Digital Ethics, 31000 Toulouse, France
10
De-Giorgio F, Benedetti B, Mancino M, Sala E, Pascali VL. The need for balancing 'black box' systems and explainable artificial intelligence: A necessary implementation in radiology. Eur J Radiol 2025; 185:112014. [PMID: 40031377 DOI: 10.1016/j.ejrad.2025.112014] [Received: 12/27/2024] [Revised: 02/23/2025] [Accepted: 02/24/2025] [Indexed: 03/05/2025]
Abstract
Radiology is one of the medical specialties most significantly affected by artificial intelligence (AI). AI systems, particularly those employing machine and deep learning, excel at processing large datasets and comparing images from similar contexts, meeting core radiological demands. However, the implementation of AI in radiology presents notable challenges, including concerns about data privacy, informed consent, and potential external interference with decision-making processes. Bias is another critical issue, often stemming from unrepresentative datasets or inadequate system training, which can distort outcomes and exacerbate healthcare inequalities. Additionally, generative AI systems may produce 'hallucinations' arising from their reliance on probabilistic modeling without the ability to distinguish between true and false information. Such risks raise ethical and legal questions, especially when AI-induced errors harm patient health. Concerning liability for medical errors involving AI, healthcare professionals currently retain full accountability for their decisions; AI systems remain tools to support, not replace, human expertise and judgment. Nevertheless, the 'black box' nature of many AI models, wherein the reasoning behind outputs remains opaque, limits the possibility of fully informed consent. We advocate prioritizing Explainable Artificial Intelligence (XAI) in radiology. While potentially less performant than black-box models, XAI enhances transparency, allowing patients to understand how their data are used and how AI influences clinical decisions, in line with ethical standards.
Affiliation(s)
- Fabio De-Giorgio
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Department of Healthcare Surveillance and Bioethics, Section of Legal Medicine, Università Cattolica del Sacro Cuore, Rome, Italy.
- Beatrice Benedetti
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Department of Healthcare Surveillance and Bioethics, Section of Legal Medicine, Università Cattolica del Sacro Cuore, Rome, Italy
- Matteo Mancino
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Università Cattolica del Sacro Cuore, Rome, Italy
- Evis Sala
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Università Cattolica del Sacro Cuore, Rome, Italy
- Vincenzo L Pascali
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Department of Healthcare Surveillance and Bioethics, Section of Legal Medicine, Università Cattolica del Sacro Cuore, Rome, Italy
11
Almalki M, Alkhamis MA, Khairallah FM, Choukou MA. Perceived artificial intelligence readiness in medical and health sciences education: a survey study of students in Saudi Arabia. BMC Medical Education 2025; 25:439. [PMID: 40140763 PMCID: PMC11938701 DOI: 10.1186/s12909-025-06995-1] [Received: 11/26/2024] [Accepted: 03/12/2025] [Indexed: 03/28/2025]
Abstract
BACKGROUND As artificial intelligence (AI) becomes increasingly integral to healthcare, preparing medical and health sciences students to engage with AI technologies is critical. OBJECTIVES This study investigates the perceived AI readiness of medical and health sciences students in Saudi Arabia, focusing on four domains: cognition, ability, vision, and ethical perspectives, using the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS). METHODS A cross-sectional survey was conducted between October and November 2023, targeting students from various universities and medical schools in Saudi Arabia. A total of 1,221 students e-consented to participate. Data were collected via a 20-minute Google Forms survey incorporating the 22-item MAIRS-MS scale. Descriptive and multivariate statistical analyses were performed using Stata version 16.0. Cronbach's alpha was calculated to assess reliability, and least squares linear regression was used to explore relationships between students' demographics and their AI readiness scores. RESULTS The overall mean AI readiness score was 62 out of 110, indicating a moderate level of readiness. Domain-specific scores were generally consistent: cognition (58%, 23.2/40), ability (57%, 22.8/40), vision (54%, 8.1/15), and ethics (57%, 8.5/15). Nearly half of students (44.5%) believed AI-related courses should be mandatory, whereas only 41% reported having such a required course in their program. CONCLUSIONS Medical and health sciences students in Saudi Arabia demonstrate moderate AI readiness across cognition, ability, vision, and ethics, indicating both a solid foundation and areas for growth. Enhancing AI curricula and emphasizing practical, ethical, and forward-thinking skills can better equip future healthcare professionals for an AI-driven future.
Affiliation(s)
- Manal Almalki
- Department of Public Health, College of Nursing and Health Sciences, Jazan University, Jazan, 45142, Saudi Arabia
- Moh A Alkhamis
- Department of Epidemiology and Biostatistics, Faculty of Public Health, Health Sciences Centre, Kuwait University, Kuwait City, Kuwait
- Farah M Khairallah
- Department of Epidemiology and Biostatistics, Faculty of Public Health, Health Sciences Centre, Kuwait University, Kuwait City, Kuwait
- Mohamed-Amine Choukou
- Department of Occupational Therapy, College of Rehabilitation Sciences, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada

12
Monzon N, Hays FA. Leveraging Generative Artificial Intelligence to Improve Motivation and Retrieval in Higher Education Learners. JMIR MEDICAL EDUCATION 2025; 11:e59210. [PMID: 40068170 PMCID: PMC11918979 DOI: 10.2196/59210] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2024] [Revised: 11/11/2024] [Accepted: 01/02/2025] [Indexed: 02/07/2025]
Abstract
Generative artificial intelligence (GenAI) presents novel approaches to enhance motivation, curriculum structure and development, and learning and retrieval processes for both learners and instructors. Though much of the attention on this emerging technology has focused on academic misconduct, we sought to leverage GenAI in curriculum structure to facilitate educational outcomes. For instructors, GenAI offers new opportunities in course design and management while reducing the time required to evaluate outcomes and personalize learner feedback. These include innovative instructional designs such as flipped classrooms and gamification, enriching teaching methodologies with focused and interactive approaches, and team-based exercise development, among others. For learners, GenAI offers unprecedented self-directed learning opportunities, improved cognitive engagement, and effective retrieval practices, leading to enhanced autonomy, motivation, and knowledge retention. Though empowering, this evolving landscape has integration challenges and ethical considerations, including accuracy, technological evolution, loss of the learner's voice, and socioeconomic disparities. Our experience demonstrates that the responsible application of GenAI in educational settings will revolutionize learning practices, making education more accessible and tailored, and producing positive motivational outcomes for both learners and instructors. Thus, we argue that leveraging GenAI in educational settings will improve outcomes, with implications extending from primary through higher and continuing education paradigms.
Affiliation(s)
- Noahlana Monzon
- Department of Nutritional Sciences, University of Oklahoma Health Sciences, 1200 N Stonewall Ave, 3064 Allied Health Building, Oklahoma City, OK, 73117, United States
- Franklin Alan Hays
- Department of Nutritional Sciences, University of Oklahoma Health Sciences, 1200 N Stonewall Ave, 3064 Allied Health Building, Oklahoma City, OK, 73117, United States

13
Bennis I, Mouwafaq S. Advancing AI-driven thematic analysis in qualitative research: a comparative study of nine generative models on Cutaneous Leishmaniasis data. BMC Med Inform Decis Mak 2025; 25:124. [PMID: 40065373 PMCID: PMC11895178 DOI: 10.1186/s12911-025-02961-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2024] [Accepted: 03/03/2025] [Indexed: 03/14/2025] Open
Abstract
BACKGROUND As part of qualitative research, thematic analysis is time-consuming and technical. The rise of generative artificial intelligence (AI), especially large language models, has brought hope of enhancing and partly automating thematic analysis. METHODS The study assessed the relative efficacy of conventional versus AI-assisted thematic analysis when investigating the psychosocial impact of cutaneous leishmaniasis (CL) scars. Four hundred forty-eight participant responses from a core study were analysed, comparing nine AI generative models (Llama 3.1 405B, Claude 3.5 Sonnet, NotebookLM, Gemini 1.5 Advanced Ultra, ChatGPT o1-Pro, ChatGPT o1, GrokV2, DeepSeekV3, and Gemini 2.0 Advanced) with manual expert analysis. Methodological rigour was maintained through Cohen's kappa coefficient calculations for concordance assessment (in jamovi) and similarity measurement via Jaccard index computations (in Python). RESULTS Advanced AI models showed impressive congruence with reference standards; some even had perfect concordance (Jaccard index = 1.00). Gender-specific analyses demonstrated consistent performance across subgroups, allowing a nuanced understanding of psychosocial consequences. The grounded theory process developed the framework of the fragile circle of vulnerabilities, which incorporated new insights into CL-related psychosocial complexity while establishing novel dimensions. CONCLUSIONS This study shows how AI can be incorporated into qualitative research methodology, particularly in complex psychosocial analysis. The AI deep learning models proved to be highly efficient and accurate. These findings imply that future qualitative research methodology should focus on maintaining analytical rigour by combining AI capabilities with human expertise, following a standardised reporting checklist that ensures full process transparency.
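The similarity measure reported in this abstract, the Jaccard index between a model's theme set and the manual expert reference, can be sketched in a few lines of Python. This is an illustrative example only; the theme names are hypothetical and are not taken from the study's data or code.

```python
# Minimal sketch of the Jaccard index used to compare each generative
# model's extracted themes against the manual expert reference set.
# Theme names below are hypothetical placeholders.

def jaccard_index(a: set[str], b: set[str]) -> float:
    """|A intersect B| / |A union B|; 1.0 indicates perfect concordance."""
    if not a and not b:
        return 1.0  # two empty theme sets are trivially identical
    return len(a & b) / len(a | b)

manual_themes = {"stigma", "social isolation", "body image", "marriage prospects"}
model_themes = {"stigma", "social isolation", "body image", "marriage prospects"}

print(jaccard_index(manual_themes, model_themes))  # identical sets -> 1.0
```

A model that recovered only some reference themes, or added spurious ones, would score below 1.0, which is how partial concordance would show up in such a comparison.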
Affiliation(s)
- Issam Bennis
- Mohammed VI International School of Public Health, Mohammed VI University of Sciences and Health, Casablanca, Morocco
- Safwane Mouwafaq
- Mohammed VI International School of Public Health, Mohammed VI University of Sciences and Health, Casablanca, Morocco

14
Hussain MS, Ramalingam PS, Chellasamy G, Yun K, Bisht AS, Gupta G. Harnessing Artificial Intelligence for Precision Diagnosis and Treatment of Triple Negative Breast Cancer. Clin Breast Cancer 2025:S1526-8209(25)00052-7. [PMID: 40158912 DOI: 10.1016/j.clbc.2025.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2024] [Revised: 01/24/2025] [Accepted: 03/04/2025] [Indexed: 04/02/2025]
Abstract
Triple-Negative Breast Cancer (TNBC) is a highly aggressive subtype of breast cancer (BC) characterized by the absence of estrogen, progesterone, and HER2 receptors, resulting in limited therapeutic options. This article critically examines the role of Artificial Intelligence (AI) in enhancing the diagnosis and treatment of TNBC. We begin by discussing the incidence of TNBC and the fundamentals of precision medicine, emphasizing the need for innovative diagnostic and therapeutic approaches. Current diagnostic methods, including conventional imaging techniques and histopathological assessments, exhibit limitations such as delayed diagnosis and interpretative discrepancies. This article highlights AI-driven advancements in image analysis, biomarker discovery, and the integration of multi-omics data, leading to enhanced precision and efficiency in diagnosis and treatment. In treatment, AI facilitates personalized therapeutic strategies, accelerates drug discovery, and enables real-time monitoring of patient responses. However, challenges persist, including issues related to data quality, model interpretability, and the societal impact of AI implementation. We conclude by discussing the future prospects of integrating AI into clinical practice and emphasizing the importance of multidisciplinary collaboration. This review aims to outline key trends and provide recommendations for utilizing AI to improve TNBC management outcomes, while highlighting the need for further research.
Affiliation(s)
- Md Sadique Hussain
- Uttaranchal Institute of Pharmaceutical Sciences, Uttaranchal University, Dehradun, Uttarakhand, India
- Prasanna Srinivasan Ramalingam
- Protein Engineering lab, School of Biosciences and Technology, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Gayathri Chellasamy
- Department of Bionanotechnology, Gachon University, Gyeonggi-do, South Korea
- Kyusik Yun
- Department of Bionanotechnology, Gachon University, Gyeonggi-do, South Korea
- Ajay Singh Bisht
- School of Pharmaceutical Sciences, Shri Guru Ram Rai University, Dehradun, Uttarakhand, India
- Gaurav Gupta
- Centre for Research Impact & Outcome-Chitkara College of Pharmacy, Chitkara University, Punjab, India; Centre of Medical and Bio-allied Health Sciences Research, Ajman University, Ajman, United Arab Emirates

15
Stroud AM, Anzabi MD, Wise JL, Barry BA, Malik MM, McGowan ML, Sharp RR. Toward Safe and Ethical Implementation of Health Care Artificial Intelligence: Insights From an Academic Medical Center. MAYO CLINIC PROCEEDINGS. DIGITAL HEALTH 2025; 3:100189. [PMID: 40206995 PMCID: PMC11975832 DOI: 10.1016/j.mcpdig.2024.100189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/11/2025]
Abstract
Claims abound that advances in artificial intelligence (AI) will permeate virtually every aspect of medicine and transform clinical practice. Simultaneously, concerns about the safety and equity of health care AI have prompted ethical and regulatory scrutiny from multiple oversight bodies. Positioned at the intersection of these perspectives, academic medical centers (AMCs) are charged with navigating the safe and responsible implementation of health care AI. Decisions about the use of AI at AMCs are complicated by uncertainties regarding the risks posed by these technologies and a lack of consensus on best practices for managing these risks. In this article, we highlight several potential harms that may arise in the adoption of health care AI, with a focus on risks to patients, clinicians, and medical practice. In addition, we describe several strategies that AMCs might adopt now to address concerns about the safety and ethical uses of health care AI. Our analysis aims to support AMCs as they seek to balance AI innovation with proactive oversight.
Affiliation(s)
- Journey L. Wise
- Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN
- Barbara A. Barry
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN

16
Cunha Reis T. Artificial intelligence and natural language processing for improved telemedicine: Before, during and after remote consultation. Aten Primaria 2025; 57:103228. [PMID: 39955812 PMCID: PMC11872648 DOI: 10.1016/j.aprim.2025.103228] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2024] [Accepted: 12/13/2024] [Indexed: 02/18/2025] Open
Abstract
The rapid evolution of telemedicine has revealed significant documentation and workflow challenges. Clinicians often struggle with the administrative burdens of telehealth visits, sacrificing valuable time better spent in direct patient interaction. This issue is further compounded by the need to maintain accurate and comprehensive records, which can be time-consuming and prone to error when approached manually. In this context, integrating artificial intelligence (AI) and natural language processing (NLP) technologies presents a transformative opportunity. Automating documentation and enhancing workflow efficiency can revolutionize healthcare delivery, alleviating clinician workloads and improving clinical quality and patient safety. Therefore, examining the application of these cutting-edge technologies becomes imperative in addressing the pressing needs of modern healthcare and optimizing health outcomes. The significance of integrating AI and NLP technologies in clinical remote practice cannot be overstated. Hence, this article aims to inspire and motivate healthcare professionals to embrace these transformative changes.
Affiliation(s)
- Tiago Cunha Reis
- Universidade de Lisboa, Faculdade de Medicina, Avenida Professor Egas Moniz, 1649-028 Lisboa, Portugal

17
Ichikawa T, Olsen E, Vinod A, Glenn N, Hanna K, Lund GC, Pierce-Talsma S. Generative Artificial Intelligence in Medical Education-Policies and Training at US Osteopathic Medical Schools: Descriptive Cross-Sectional Survey. JMIR MEDICAL EDUCATION 2025; 11:e58766. [PMID: 39934984 PMCID: PMC11835596 DOI: 10.2196/58766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2024] [Revised: 10/09/2024] [Accepted: 01/02/2025] [Indexed: 02/13/2025]
Abstract
Background Interest has recently increased in generative artificial intelligence (GenAI), a subset of artificial intelligence that can create new content. Although the publicly available GenAI tools are not specifically trained in the medical domain, they have demonstrated proficiency in a wide range of medical assessments. The future integration of GenAI in medicine remains unknown. However, the rapid availability of GenAI with a chat interface and the potential risks and benefits are the focus of great interest. As with any significant medical advancement or change, medical schools must adapt their curricula to equip students with the skills necessary to become successful physicians. Furthermore, medical schools must ensure that faculty members have the skills to harness these new opportunities to increase their effectiveness as educators. How medical schools currently fulfill their responsibilities is unclear. Colleges of Osteopathic Medicine (COMs) in the United States currently train a significant proportion of the total number of medical students. These COMs are in academic settings ranging from large public research universities to small private institutions. Therefore, studying COMs will offer a representative sample of the current GenAI integration in medical education. Objective This study aims to describe the policies and training regarding the specific aspect of GenAI in US COMs, targeting students, faculty, and administrators. Methods Web-based surveys were sent to deans and Student Government Association (SGA) presidents of the main campuses of fully accredited US COMs. The dean survey included questions regarding current and planned policies and training related to GenAI for students, faculty, and administrators. The SGA president survey included only those questions related to current student policies and training. Results Responses were received from 81% (26/32) of COMs surveyed. 
This included 47% (15/32) of the deans and 50% (16/32) of the SGA presidents (with 5 COMs represented by both the dean and the SGA president). Most COMs did not have a policy on student use of GenAI, as reported by the deans (14/15, 93%) and the SGA presidents (14/16, 88%). Of the COMs with no policy, 79% (11/14) had no formal plans for policy development. Only 1 COM had training for students, which focused entirely on the ethics of using GenAI. Most COMs had no formal plans to provide mandatory (11/14, 79%) or elective (11/15, 73%) training. No COM had GenAI policies for faculty or administrators, and 80% had no formal plans for policy development. Furthermore, 33.3% (5/15) of COMs had faculty or administrator GenAI training. Except for examination question development, there was no training to increase faculty or administrator capabilities and efficiency or to decrease their workload. Conclusions The survey revealed that most COMs lack GenAI policies and training for students, faculty, and administrators. The few institutions with policies or training were extremely limited in scope. Most institutions without current training or policies had no formal plans for development. The lack of current policies and training initiatives suggests inadequate preparedness for integrating GenAI into the medical school environment, thereby relegating the responsibility for ethical guidance and training to the individual COM member.
Affiliation(s)
- Tsunagu Ichikawa
- College of Osteopathic Medicine, University of New England, 11 Hills Beach Road, Biddeford, ME, 04005, United States
- Elizabeth Olsen
- College of Osteopathic Medicine, Rocky Vista University, Parker, CO, United States
- Arathi Vinod
- College of Osteopathic Medicine, Touro University California, Vallejo, CA, United States
- Noah Glenn
- McCombs School of Business, University of Texas at Austin, Austin, TX, United States
- Karim Hanna
- Morsani College of Medicine, University of South Florida, Tampa, FL, United States
- Stacey Pierce-Talsma
- College of Osteopathic Medicine, University of New England, 11 Hills Beach Road, Biddeford, ME, 04005, United States

18
Okamoto S, Kataoka M, Itano M, Sawai T. AI-based medical ethics education: examining the potential of large language models as a tool for virtue cultivation. BMC MEDICAL EDUCATION 2025; 25:185. [PMID: 39910559 PMCID: PMC11796193 DOI: 10.1186/s12909-025-06801-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/04/2024] [Accepted: 01/31/2025] [Indexed: 02/07/2025]
Abstract
BACKGROUND With artificial intelligence (AI) increasingly revolutionising medicine, this study critically evaluates the integration of large language models (LLMs), known for advanced text processing and generation capabilities, in medical ethics education, focusing on promoting virtue. Positing LLMs as central to mimicking nuanced human communication, it examines their use in medical education and the ethicality of embedding AI in such contexts. METHOD Using a hybrid approach that combines principlist and non-principlist methodologies, we position LLMs as exemplars and advisors. RESULTS We discuss the imperative for including AI ethics in medical curricula and its utility as an educational tool, identify the lack of educational resources in medical ethics education, and advocate for future LLMs to mitigate this problem as a "second-best" tool. We also emphasise the critical importance of instilling virtue in medical ethics education and illustrate how LLMs can effectively impart moral knowledge and model virtue cultivation. We address expected counter-arguments to using LLMs in this area and explain their profound potential to enrich medical ethics education, including facilitating the acquisition of moral knowledge and developing ethically grounded practitioners. CONCLUSIONS The study involved a comprehensive exploration of the function of LLMs in medical ethics education, positing that tools such as ChatGPT can profoundly enhance the learning experience in the future. This is achieved through tailored, interactive educational encounters while addressing the ethical nuances of their use in educational settings.
Affiliation(s)
- Shimpei Okamoto
- Graduate School of Humanities and Social Sciences, Hiroshima University, Higashi-Hiroshima, Japan
- Masanori Kataoka
- Uehiro Division for Applied Ethics, Graduate School of Humanities and Social Sciences, Hiroshima University, Higashi-Hiroshima, Japan
- Makoto Itano
- Graduate School of Humanities and Social Sciences, Hiroshima University, Higashi-Hiroshima, Japan
- Tsutomu Sawai
- Graduate School of Humanities and Social Sciences, Hiroshima University, Higashi-Hiroshima, Japan
- Uehiro Division for Applied Ethics, Graduate School of Humanities and Social Sciences, Hiroshima University, Higashi-Hiroshima, Japan
- Institute for the Advanced Study of Human Biology (ASHBi), Kyoto University, Kyoto, Japan
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore

19
Ogundiya O, Rahman TJ, Valnarov-Boulter I, Young TM. Looking Back on Digital Medical Education Over the Last 25 Years and Looking to the Future: Narrative Review. J Med Internet Res 2024; 26:e60312. [PMID: 39700490 PMCID: PMC11695957 DOI: 10.2196/60312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2024] [Revised: 09/05/2024] [Accepted: 12/04/2024] [Indexed: 12/21/2024] Open
Abstract
BACKGROUND The last 25 years have seen enormous progression in digital technologies across the whole of the health service, including health education. The rapid evolution and use of web-based and digital techniques have been significantly transforming this field since the beginning of the new millennium. These advancements continue to progress swiftly, even more so after the COVID-19 pandemic. OBJECTIVE This narrative review aims to outline and discuss the developments that have taken place in digital medical education across the defined time frame. In addition, evidence for potential opportunities and challenges facing digital medical education in the near future was collated for analysis. METHODS Literature reviews were conducted using PubMed, Web of Science Core Collection, Scopus, Google Scholar, and Embase. The participants and learners in this study included medical students, physicians in training or continuing professional development, nurses, paramedics, and patients. RESULTS Evidence of the significant steps in the development of digital medical education in the past 25 years was presented and analyzed in terms of application, impact, and implications for the future. The results were grouped into the following themes for discussion: learning management systems; telemedicine (in digital medical education); mobile health; big data analytics; the metaverse, augmented reality, and virtual reality; the COVID-19 pandemic; artificial intelligence; and ethics and cybersecurity. CONCLUSIONS Major changes and developments in digital medical education have occurred from around the start of the new millennium. Key steps in this journey include technical developments in teleconferencing and learning management systems, along with a marked increase in mobile device use for accessing learning over this time. 
While the pace of evolution in digital medical education accelerated during the COVID-19 pandemic, rapid progress has continued since the pandemic's resolution. Many of these technologies, such as augmented reality, virtual reality, and artificial intelligence, are now widely used in health education and other fields, and offer significant future potential. The opportunities these technologies offer must be balanced against the associated challenges in areas such as cybersecurity, the integrity of web-based assessments, ethics, and digital privacy to ensure that digital medical education continues to thrive in the future.
Affiliation(s)
- Ioan Valnarov-Boulter
- Queen Square Institute of Neurology, University College London, London, United Kingdom
- Tim Michael Young
- Queen Square Institute of Neurology, University College London, London, United Kingdom

20
Regmi A, Mao X, Qi Q, Tang W, Yang K. Students' perception and self-efficacy in blended learning of medical nutrition course: a mixed-method research. BMC MEDICAL EDUCATION 2024; 24:1411. [PMID: 39627743 PMCID: PMC11616338 DOI: 10.1186/s12909-024-06339-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2024] [Accepted: 11/13/2024] [Indexed: 12/06/2024]
Abstract
BACKGROUND The blended teaching mode, which combines online and offline learning, has gained significant traction in higher education. This study aims to explore the impact of blended learning on students' academic performance, engagement, and self-efficacy in a medical nutrition course. METHODS A mixed-method research design was employed, involving 110 undergraduate students enrolled in a blended learning medical nutrition course and a control group of 93 students from a traditional learning environment. Data collection included academic performance assessments, semi-structured interviews, and an anonymous questionnaire. Quantitative data were analyzed using t-tests and chi-square tests, while qualitative data were subjected to thematic analysis. RESULTS Students in the blended learning group demonstrated significantly higher self-efficacy, particularly in organizing their study plans, participating in interactive learning activities, and applying course knowledge. Academic performance was notably better in collaborative assessments, such as group discussions and exploratory projects, in the blended learning group compared to the control group. Qualitative analysis revealed that students appreciated the flexibility and engagement offered by the blended learning model, although they also faced challenges related to self-discipline and the learning environment. CONCLUSIONS The blended learning approach enhances student engagement, self-efficacy, and collaborative skills, particularly in group-based assessments. While students benefit from the flexibility and richness of learning resources, challenges related to self-discipline and learning environments need to be addressed to optimize the effectiveness of blended learning.
Affiliation(s)
- Aksara Regmi
- Department of Clinical Nutrition, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Department of Clinical Nutrition, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Xuanxia Mao
- Department of Clinical Nutrition, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Department of Clinical Nutrition, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Qi Qi
- Department of Clinical Nutrition, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Department of Clinical Nutrition, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Wenjing Tang
- Department of Clinical Nutrition, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Department of Clinical Nutrition, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Kefeng Yang
- Department of Clinical Nutrition, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Department of Clinical Nutrition, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China

21
Silver JK, Dodurgali MR, Gavini N. Artificial Intelligence in Medical Education and Mentoring in Rehabilitation Medicine. Am J Phys Med Rehabil 2024; 103:1039-1044. [PMID: 39016292 DOI: 10.1097/phm.0000000000002604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/18/2024]
Abstract
Artificial intelligence emerges as a transformative force, offering novel solutions to enhance medical education and mentorship in the specialty of physical medicine and rehabilitation. Artificial intelligence is a transformative technology being adopted in nearly every industry, and its use in medical education is growing. Artificial intelligence may also help address some of the constraints of traditional mentorship, including the limited availability of experienced mentors and the logistical difficulties of time and geography. In this commentary, we discuss various models of artificial intelligence in medical education and mentoring, including expert systems, conversational agents, and hybrid models. These models enable tailored guidance, broaden outreach within the physical medicine and rehabilitation community, and support continuous learning and development. By balancing artificial intelligence's technical advantages with the essential human elements while addressing ethical considerations, its integration into medical education and mentorship presents a paradigm shift toward a more accessible, responsive, and enriched experience in rehabilitation medicine.
Affiliation(s)
- Julie K Silver
- From the Department of Orthopedics, Wake Forest University School of Medicine, Winston-Salem, North Carolina (JKS); Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, Massachusetts (NG); Spaulding Rehabilitation Hospital, Charlestown, Massachusetts (MRD, NG); and MGH Institute of Health Professions, Boston, Massachusetts (NG)

22
Omar M, Brin D, Glicksberg B, Klang E. Utilizing natural language processing and large language models in the diagnosis and prediction of infectious diseases: A systematic review. Am J Infect Control 2024; 52:992-1001. [PMID: 38588980 DOI: 10.1016/j.ajic.2024.03.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2024] [Revised: 03/26/2024] [Accepted: 03/27/2024] [Indexed: 04/10/2024]
Abstract
BACKGROUND Natural Language Processing (NLP) and Large Language Models (LLMs) hold largely untapped potential in infectious disease management. This review explores their current use and uncovers areas needing more attention. METHODS This analysis followed systematic review procedures, registered with the International Prospective Register of Systematic Reviews (PROSPERO). We conducted a search across major databases, including PubMed, Embase, Web of Science, and Scopus, up to December 2023, using keywords related to NLP, LLMs, and infectious diseases. We also employed the Quality Assessment of Diagnostic Accuracy Studies-2 tool to evaluate the quality and robustness of the included studies. RESULTS Our review identified 15 studies with diverse applications of NLP in infectious disease management. Notable examples include GPT-4's application in detecting urinary tract infections and BERTweet's use in Lyme disease surveillance through social media analysis. These models demonstrated effective disease monitoring and public health tracking capabilities. However, effectiveness varied across studies. For instance, while some NLP tools showed high accuracy in pneumonia detection and high sensitivity in identifying invasive mold diseases from medical reports, others fell short in areas like bloodstream infection management. CONCLUSIONS This review highlights the yet-to-be-fully-realized promise of NLP and LLMs in infectious disease management. It calls for more exploration to fully harness AI's capabilities, particularly in diagnosis, surveillance, predicting disease courses, and tracking epidemiological trends.
Affiliation(s)
- Mahmud Omar
- Tel-Aviv University, Faculty of Medicine, Tel-Aviv, Israel.
- Dana Brin
- Division of Diagnostic Imaging, Sheba Medical Center, Affiliated to Tel-Aviv University, Ramat Gan, Israel
- Benjamin Glicksberg
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY; The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY
- Eyal Klang
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY

23
Kim YI, Kim KH, Oh HJ, Seo Y, Kwon SM, Sung KS, Chong K, Lee MH. Assessing the Suitability of Artificial Intelligence-Based Chatbots as Counseling Agents for Patients with Brain Tumor: A Comprehensive Survey Analysis. World Neurosurg 2024; 187:e963-e981. [PMID: 38735564 DOI: 10.1016/j.wneu.2024.05.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2024] [Accepted: 05/06/2024] [Indexed: 05/14/2024]
Abstract
OBJECTIVE The internet, particularly social media, has become a popular resource for learning about health and investigating one's own health conditions. The development of artificial intelligence (AI) chatbots has been fueled by the increasing availability of digital health data and advances in natural language processing techniques. While these chatbots are more accessible than before, they sometimes fail to provide accurate information. METHODS We used representative chatbots currently available (Chat Generative Pretrained Transformer-3.5, Bing Chat, and Google Bard) to answer questions commonly asked by brain tumor patients. The brain tumor committee created and selected simulated scenarios built around these commonly asked questions. The goal of the study was introduced to each chatbot, the situation was explained, and the questions were asked. All responses were collected without modification and shown to the committee members, who judged them while blinded to the type of chatbot. RESULTS There was no significant difference in accuracy or communication ability among the 3 chatbots (P = 0.253 and 0.090, respectively). For empathy, Bing Chat and Google Bard were superior to Chat Generative Pretrained Transformer (P = 0.004 and 0.002, respectively). The purpose of this study was not to assess or verify the relative superiority of each chatbot, but to identify the shortcomings and changes needed if AI chatbots are to be used for patient medical purposes. CONCLUSION AI-based chatbots are a convenient way for patients and the general public to access medical information. Under such circumstances, medical professionals must ensure that the information provided to chatbot users is accurate and safe.
Affiliation(s)
- Young Il Kim
- Department of Neurosurgery, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
- Kyung Hwan Kim
- Department of Neurosurgery, Chungnam National University Hospital, Chungnam National University School of Medicine, Daejeon, South Korea
- Hyuk-Jin Oh
- Department of Neurosurgery, Soonchunhyang University Cheonan Hospital, Cheonan, South Korea
- Youngbeom Seo
- Department of Neurosurgery, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea
- Sae Min Kwon
- Department of Neurosurgery, Dongsan Medical Center, Keimyung University School of Medicine, Daegu, South Korea
- Kyoung Su Sung
- Department of Neurosurgery, Dong-A University Hospital, Dong-A University College of Medicine, Busan, South Korea
- Kyuha Chong
- Department of Neurosurgery, Brain Tumor Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
- Min Ho Lee
- Department of Neurosurgery, Uijeongbu St. Mary's Hospital, School of Medicine, The Catholic University of Korea, Seoul, South Korea.

24
Stalp JL, Denecke A, Jentschke M, Hillemanns P, Klapdor R. Quality of ChatGPT-Generated Therapy Recommendations for Breast Cancer Treatment in Gynecology. Curr Oncol 2024; 31:3845-3854. [PMID: 39057156 PMCID: PMC11275284 DOI: 10.3390/curroncol31070284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2024] [Revised: 06/20/2024] [Accepted: 06/28/2024] [Indexed: 07/28/2024] Open
Abstract
Introduction: Artificial intelligence (AI) is revolutionizing medical workflows, with self-learning systems like ChatGPT showing promise in therapy recommendations. AI's role in healthcare is expanding, particularly as tools like ChatGPT become widely accessible; however, understanding their limitations is vital for safe implementation. Our study evaluated ChatGPT's performance in suggesting treatments for 30 breast cancer cases. Material and Methods: We used 30 breast cancer cases from our medical board and assessed ChatGPT's suggestions. The input was standardized, incorporating relevant patient details and treatment options. ChatGPT's output was evaluated by oncologists using a given questionnaire. Results: The oncologists rated ChatGPT's treatment recommendations as overall sufficient, with minor limitations. The HER2 treatment category was the best-rated therapy option, receiving the most accurate recommendations. Primary cases received more accurate recommendations, especially regarding chemotherapy. Conclusions: While ChatGPT demonstrated potential, it showed difficulties in intricate cases and postoperative scenarios, struggled to offer chronological treatment sequences, and partially lacked precision. Refining inputs, addressing ethical intricacies, and ensuring chronological treatment suggestions are essential. Ongoing research is vital to improving AI's accuracy, balancing AI-driven suggestions with expert insights, and ensuring safe and reliable AI integration into patient care.
Affiliation(s)
- Jan Lennart Stalp
- Department of Obstetrics and Gynecology, Hannover Medical School, 30625 Hannover, Germany
- Agnieszka Denecke
- Department of Obstetrics and Gynecology, Hannover Medical School, 30625 Hannover, Germany
- Matthias Jentschke
- Department of Obstetrics and Gynecology, Hannover Medical School, 30625 Hannover, Germany
- Peter Hillemanns
- Department of Obstetrics and Gynecology, Hannover Medical School, 30625 Hannover, Germany
- Rüdiger Klapdor
- Department of Obstetrics and Gynecology, Hannover Medical School, 30625 Hannover, Germany
- Department of Obstetrics and Gynecology, Albertinen Hospital Hamburg, 22457 Hamburg, Germany

25
Reese A, Evancho P, Richards R, Arbel E, O'Shea A. To the Editor: An Urgent Call to Action to Integrate Artificial Intelligence Curriculum Into Medical Education. J Grad Med Educ 2024; 16:373. [PMID: 38882430 PMCID: PMC11173035 DOI: 10.4300/jgme-d-24-00282.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 06/18/2024] Open
Affiliation(s)
- Alyssa Reese
- is a Medical Student, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York, USA
- Peter Evancho
- is a Medical Student and Licensed Attorney, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Raymond Richards
- is a Medical Student, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York, USA
- Eylon Arbel
- is a Medical Student, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York, USA
- Aidan O'Shea
- is a Medical Student, University of Rochester School of Medicine & Dentistry, Rochester, New York, USA

26
Artsi Y, Sorin V, Konen E, Glicksberg BS, Nadkarni G, Klang E. Large language models for generating medical examinations: systematic review. BMC MEDICAL EDUCATION 2024; 24:354. [PMID: 38553693 PMCID: PMC10981304 DOI: 10.1186/s12909-024-05239-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2024] [Accepted: 02/28/2024] [Indexed: 04/01/2024]
Abstract
BACKGROUND Writing multiple choice questions (MCQs) for medical exams is challenging, requiring extensive medical knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating medical MCQs. METHODS The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The authors searched MEDLINE for studies published up to November 2023, using search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. Risk of bias was evaluated using a tailored QUADAS-2 tool. RESULTS Overall, eight studies published between April 2023 and October 2023 were included; two were at high risk of bias. Six studies used ChatGPT 3.5, while two employed GPT-4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate their validity. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies presented faulty questions that were deemed inappropriate for medical exams, and some questions required additional modification to qualify. CONCLUSIONS LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored. Further study in this field is essential and more conclusive evidence is needed; until then, LLMs may serve as a supplementary tool for writing medical examinations.
Affiliation(s)
- Yaara Artsi
- Azrieli Faculty of Medicine, Bar-Ilan University, Ha'Hadas St. 1, Rishon Le Zion, Zefat, 7550598, Israel.
- Vera Sorin
- Department of Diagnostic Imaging, Chaim Sheba Medical Center, Ramat Gan, Israel
- Tel-Aviv University School of Medicine, Tel Aviv, Israel
- DeepVision Lab, Chaim Sheba Medical Center, Ramat Gan, Israel
- Eli Konen
- Department of Diagnostic Imaging, Chaim Sheba Medical Center, Ramat Gan, Israel
- Tel-Aviv University School of Medicine, Tel Aviv, Israel
- Benjamin S Glicksberg
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Girish Nadkarni
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Eyal Klang
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA

27
Weidener L, Fischer M. Proposing a Principle-Based Approach for Teaching AI Ethics in Medical Education. JMIR MEDICAL EDUCATION 2024; 10:e55368. [PMID: 38285931 PMCID: PMC10891487 DOI: 10.2196/55368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 01/02/2024] [Accepted: 01/29/2024] [Indexed: 01/31/2024]
Abstract
The use of artificial intelligence (AI) in medicine, potentially leading to substantial advancements such as improved diagnostics, has been of increasing scientific and societal interest in recent years. However, the use of AI raises new ethical challenges, such as an increased risk of bias and potential discrimination against patients, as well as misdiagnoses potentially leading to over- or underdiagnosis with substantial consequences for patients. Recognizing these challenges, current research underscores the importance of integrating AI ethics into medical education. This viewpoint paper aims to introduce a comprehensive set of ethical principles for teaching AI ethics in medical education. This dynamic and principle-based approach is designed to be adaptive and comprehensive, addressing not only current but also emerging ethical challenges associated with the use of AI in medicine. The study conducts a theoretical analysis of the current academic discourse on AI ethics in medical education, identifying potential gaps and limitations. The inherent interconnectivity and interdisciplinary nature of these anticipated challenges are illustrated through a focused discussion on "informed consent" in the context of AI in medicine and medical education. The paper proposes a principle-based approach to AI ethics education, building on the 4 principles of medical ethics (autonomy, beneficence, nonmaleficence, and justice) and extending them by integrating 3 public health ethics principles (efficiency, common good orientation, and proportionality). This approach offers a foundational framework for addressing the anticipated ethical challenges of using AI in medicine, as called for in the current academic discourse. By incorporating the 3 principles of public health ethics, it ensures that medical ethics education remains relevant and responsive to the dynamic landscape of AI integration in medicine. As the advancement of AI technologies in medicine is expected to increase, medical ethics education must adapt and evolve accordingly. The proposed principle-based approach provides an important foundation to ensure that future medical professionals are not only aware of the ethical dimensions of AI in medicine but also equipped to make informed ethical decisions in their practice. Future research is required to develop problem-based and competency-oriented learning objectives and educational content for this approach.
Affiliation(s)
- Lukas Weidener
- UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer
- UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria

28
Weidener L, Fischer M. Role of Ethics in Developing AI-Based Applications in Medicine: Insights From Expert Interviews and Discussion of Implications. JMIR AI 2024; 3:e51204. [PMID: 38875585 PMCID: PMC11041491 DOI: 10.2196/51204] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 11/20/2023] [Accepted: 12/09/2023] [Indexed: 06/16/2024]
Abstract
BACKGROUND The integration of artificial intelligence (AI)-based applications in the medical field has increased significantly, offering potential improvements in patient care and diagnostics. However, alongside these advancements, there is growing concern about ethical considerations, such as bias, informed consent, and trust in the development of these technologies. OBJECTIVE This study aims to assess the role of ethics in the development of AI-based applications in medicine. Furthermore, this study focuses on the potential consequences of neglecting ethical considerations in AI development, particularly their impact on patients and physicians. METHODS Qualitative content analysis was used to analyze the responses from expert interviews. Experts were selected based on their involvement in the research or practical development of AI-based applications in medicine for at least 5 years, leading to the inclusion of 7 experts in the study. RESULTS The analysis revealed 3 main categories and 7 subcategories reflecting a wide range of views on the role of ethics in AI development. This variance underscores the subjectivity and complexity of integrating ethics into the development of AI in medicine. Although some experts view ethics as fundamental, others prioritize performance and efficiency, with some perceiving ethics as potential obstacles to technological progress. This dichotomy of perspectives clearly emphasizes the subjectivity and complexity surrounding the role of ethics in AI development, reflecting the inherent multifaceted nature of this issue. CONCLUSIONS Despite the methodological limitations impacting the generalizability of the results, this study underscores the critical importance of consistent and integrated ethical considerations in AI development for medical applications. It advocates further research into effective strategies for ethical AI development, emphasizing the need for transparent and responsible practices, consideration of diverse data sources, physician training, and the establishment of comprehensive ethical and legal frameworks.
Affiliation(s)
- Lukas Weidener
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria

29
Weidener L, Fischer M. Artificial Intelligence in Medicine: Cross-Sectional Study Among Medical Students on Application, Education, and Ethical Aspects. JMIR MEDICAL EDUCATION 2024; 10:e51247. [PMID: 38180787 PMCID: PMC10799276 DOI: 10.2196/51247] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 10/26/2023] [Accepted: 12/02/2023] [Indexed: 01/06/2024]
Abstract
BACKGROUND The use of artificial intelligence (AI) in medicine not only directly impacts the medical profession but is also increasingly associated with various potential ethical aspects. In addition, the expanding use of AI and AI-based applications such as ChatGPT demands a corresponding shift in medical education to adequately prepare future practitioners for the effective use of these tools and address the associated ethical challenges they present. OBJECTIVE This study aims to explore how medical students from Germany, Austria, and Switzerland perceive the use of AI in medicine and the teaching of AI and AI ethics in medical education in accordance with their use of AI-based chat applications, such as ChatGPT. METHODS This cross-sectional study, conducted from June 15 to July 15, 2023, surveyed medical students across Germany, Austria, and Switzerland using a web-based survey. This study aimed to assess students' perceptions of AI in medicine and the integration of AI and AI ethics into medical education. The survey, which included 53 items across 6 sections, was developed and pretested. Data analysis used descriptive statistics (median, mode, IQR, total number, and percentages) and either the chi-square or Mann-Whitney U tests, as appropriate. RESULTS Surveying 487 medical students across Germany, Austria, and Switzerland revealed limited formal education on AI or AI ethics within medical curricula, although 38.8% (189/487) had prior experience with AI-based chat applications, such as ChatGPT. Despite varied prior exposures, 71.7% (349/487) anticipated a positive impact of AI on medicine. There was widespread consensus (385/487, 74.9%) on the need for AI and AI ethics instruction in medical education, although the current offerings were deemed inadequate. Regarding the AI ethics education content, all proposed topics were rated as highly relevant. CONCLUSIONS This study revealed a pronounced discrepancy between the use of AI-based (chat) applications, such as ChatGPT, among medical students in Germany, Austria, and Switzerland and the teaching of AI in medical education. To adequately prepare future medical professionals, there is an urgent need to integrate the teaching of AI and AI ethics into the medical curricula.
Affiliation(s)
- Lukas Weidener
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria

30
Zhang C, Xu J, Tang R, Yang J, Wang W, Yu X, Shi S. Novel research and future prospects of artificial intelligence in cancer diagnosis and treatment. J Hematol Oncol 2023; 16:114. [PMID: 38012673 PMCID: PMC10680201 DOI: 10.1186/s13045-023-01514-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Accepted: 11/20/2023] [Indexed: 11/29/2023] Open
Abstract
Research into the potential benefits of artificial intelligence for comprehending the intricate biology of cancer has grown as a result of the widespread use of deep learning and machine learning in the healthcare sector and the availability of highly specialized cancer datasets. Here, we review new artificial intelligence approaches and how they are being used in oncology. We describe how artificial intelligence might be used in the detection, prognosis, and administration of cancer treatments and introduce the use of the latest large language models such as ChatGPT in oncology clinics. We highlight artificial intelligence applications for omics data types, and we offer perspectives on how the various data types might be combined to create decision-support tools. We also evaluate the present constraints and challenges to applying artificial intelligence in precision oncology. Finally, we discuss how current challenges may be surmounted to make artificial intelligence useful in clinical settings in the future.
Affiliation(s)
- Chaoyi Zhang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jin Xu
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Rong Tang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jianhui Yang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Wei Wang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Xianjun Yu
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China.
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China.
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China.
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China.
- Si Shi
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China.
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China.
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China.
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China.