1. Alghitran A, AlOsaimi HM, Albuluwi A, Almalki EO, Aldowayan AZ, Alharthi R, Qattan JM, Alghamdi F, AlHalabi M, Almalki NA, Alharthi A, Alshammari A, Kanan M. Integrating ChatGPT as a Tool in Pharmacy Practice: A Cross-Sectional Exploration Among Pharmacists in Saudi Arabia. Integr Pharm Res Pract 2025; 14:31-43. PMID: 40125532; PMCID: PMC11927492; DOI: 10.2147/iprp.s500689.
Abstract
Purpose: Artificial Intelligence (AI), especially ChatGPT, is rapidly assimilating into healthcare, providing significant advantages in pharmacy practice, such as improved clinical decision-making, patient counselling, and drug information management. The adoption of AI tools is heavily contingent upon pharmacy practitioners' knowledge, attitudes, and practices (KAP). This study sought to evaluate the knowledge and practices of pharmacists in Saudi Arabia concerning the utilization of ChatGPT in their daily activities.
Patients and Methods: A cross-sectional study was performed from May 2023 to July 2024, including pharmacists in Riyadh, Saudi Arabia. An online pre-validated KAP questionnaire was disseminated, collecting data on demographics, knowledge, attitudes, and practices regarding ChatGPT. Descriptive statistics and regression analyses were conducted using SPSS.
Results: Of 1022 respondents, 78.7% were familiar with AI in pharmacy, while 90.1% correctly identified ChatGPT as an advanced AI chatbot. Positive attitudes towards ChatGPT were reported by 64.1% of pharmacists, although only 24.3% used AI tools regularly. Significant predictors of positive attitudes and practices included academic/research roles (β=0.7, p=0.005) and 6-10 years of experience (β=0.9, p=0.05). Ethical concerns were raised by 64% of respondents, and 92% reported a lack of formal training.
Conclusion: While the majority of pharmacists held positive attitudes toward ChatGPT, practical implementation remains limited due to ethical concerns and inadequate training. Addressing these barriers is essential for successful AI integration in pharmacy, supporting Saudi Arabia's Vision 2030 initiative.
Affiliation(s)
- Abdulrahman Alghitran
- General Administration of Pharmaceutical Care, Ministry of Health, Riyadh, Saudi Arabia
- Hind M AlOsaimi
- Department of Pharmacy Services Administration, King Fahad Medical City, Riyadh Second Health Cluster, Riyadh, Saudi Arabia
- Ahmad Albuluwi
- General Administration of Pharmaceutical Care, Ministry of Health, Riyadh, Saudi Arabia
- Rakan Alharthi
- College of Pharmacy, Taif University, Taif, 26513, Saudi Arabia
- Fahd Alghamdi
- College of Pharmacy, Taif University, Taif, 26513, Saudi Arabia
- Asma Alshammari
- Department of Pharmaceutical Care, King Abdulaziz Medical City, Riyadh, Saudi Arabia
- Muhammad Kanan
- Department of Pharmacy Services Administration, Rafha General Hospital, Northern Border Cluster, Saudi Arabia
2. Laohawetwanit T, Namboonlue C, Apornvirat S. Accuracy of GPT-4 in histopathological image detection and classification of colorectal adenomas. J Clin Pathol 2025; 78:202-207. PMID: 38199797; DOI: 10.1136/jcp-2023-209304.
Abstract
AIMS: To evaluate the accuracy of Chat Generative Pre-trained Transformer (ChatGPT) powered by GPT-4 in histopathological image detection and classification of colorectal adenomas, using the diagnostic consensus provided by pathologists as a reference standard.
METHODS: A study was conducted with 100 colorectal polyp photomicrographs, comprising an equal number of adenomas and non-adenomas, classified by two pathologists. These images were analysed once by classic GPT-4 in October 2023 and 20 times by custom GPT-4 in December 2023. GPT-4's responses were compared against the reference standard through statistical measures to evaluate its proficiency in histopathological diagnosis, with the pathologists further assessing the model's descriptive accuracy.
RESULTS: GPT-4 demonstrated a median sensitivity of 74% and specificity of 36% for adenoma detection. The median accuracy of polyp classification varied, ranging from 16% for non-specific changes to 36% for tubular adenomas. Its diagnostic consistency, indicated by low kappa values ranging from 0.06 to 0.11, suggested only poor to slight agreement. All of the microscopic descriptions corresponded with their diagnoses. GPT-4 also commented on the limitations of its diagnoses (eg, slide diagnosis is best done by pathologists, the inadequacy of single-image diagnostic conclusions, the need for clinical data and a higher magnification view).
CONCLUSIONS: GPT-4 showed high sensitivity but low specificity in detecting adenomas and varied accuracy for polyp classification. However, its diagnostic consistency was low. This artificial intelligence tool acknowledged its diagnostic limitations, emphasising the need for a pathologist's expertise and additional clinical context.
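The metrics this study reports (sensitivity, specificity, and Cohen's kappa for run-to-run consistency) are standard and easy to reproduce. A minimal Python sketch follows; the confusion-matrix counts and the two repeated runs are hypothetical stand-ins, not study data (the counts merely mirror the reported 74%/36% medians under an assumed 50/50 split).

```python
# Sensitivity/specificity from binary confusion-matrix counts, plus Cohen's
# kappa for agreement between two repeated runs over the same items.
# Illustrative only; none of these values are taken from the study.

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement between two runs on the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

run1 = ["adenoma", "adenoma", "other", "adenoma", "other",
        "adenoma", "other", "other", "adenoma", "other"]
run2 = ["adenoma", "other", "other", "adenoma", "adenoma",
        "adenoma", "other", "adenoma", "adenoma", "other"]
print(sensitivity(tp=37, fn=13))           # 0.74, mirroring the 74% median above
print(specificity(tn=18, fp=32))           # 0.36, mirroring the 36% median above
print(round(cohens_kappa(run1, run2), 2))  # 0.4 on this toy pair of runs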
Affiliation(s)
- Thiyaphat Laohawetwanit
- Division of Pathology, Chulabhorn International College of Medicine, Thammasat University, Pathum Thani, Thailand
- Division of Pathology, Thammasat University Hospital, Pathum Thani, Thailand
- Sompon Apornvirat
- Division of Pathology, Chulabhorn International College of Medicine, Thammasat University, Pathum Thani, Thailand
- Division of Pathology, Thammasat University Hospital, Pathum Thani, Thailand
3. Andrikyan W, Sametinger SM, Kosfeld F, Jung-Poppe L, Fromm MF, Maas R, Nicolaus HF. Artificial intelligence-powered chatbots in search engines: a cross-sectional study on the quality and risks of drug information for patients. BMJ Qual Saf 2025; 34:100-109. PMID: 39353736; PMCID: PMC11874309; DOI: 10.1136/bmjqs-2024-017476.
Abstract
BACKGROUND: Search engines often serve as a primary resource for patients to obtain drug information. However, the search engine market is rapidly changing due to the introduction of artificial intelligence (AI)-powered chatbots. The consequences for medication safety when patients interact with chatbots remain largely unexplored.
OBJECTIVE: To explore the quality and potential safety concerns of answers provided by an AI-powered chatbot integrated within a search engine.
METHODOLOGY: Bing Copilot was queried on 10 frequently asked patient questions regarding the 50 most prescribed drugs in the US outpatient market. Patient questions covered drug indications, mechanisms of action, instructions for use, adverse drug reactions and contraindications. Readability of chatbot answers was assessed using the Flesch Reading Ease Score. Completeness and accuracy were evaluated based on corresponding patient drug information in the pharmaceutical encyclopaedia drugs.com. On a preselected subset of inaccurate chatbot answers, healthcare professionals evaluated the likelihood and extent of possible harm if patients follow the chatbot's given recommendations.
RESULTS: Of 500 generated chatbot answers, overall readability implied that responses were difficult to read according to the Flesch Reading Ease Score. Overall median completeness and accuracy of chatbot answers were 100.0% (IQR 50.0-100.0%) and 100.0% (IQR 88.1-100.0%), respectively. Of the subset of 20 chatbot answers, experts found 66% (95% CI 50% to 85%) to be potentially harmful. 42% (95% CI 25% to 60%) of these 20 chatbot answers were found to potentially cause mild to moderate harm, and 22% (95% CI 10% to 40%) to cause severe harm or even death if patients follow the chatbot's advice.
CONCLUSIONS: AI-powered chatbots are capable of providing overall complete and accurate patient drug information. Yet, experts deemed a considerable number of answers incorrect or potentially harmful. Furthermore, the complexity of chatbot answers may limit patient understanding. Hence, healthcare professionals should be cautious in recommending AI-powered search engines until more precise and reliable alternatives are available.
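Readability here is graded with the Flesch Reading Ease Score, FRES = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words), where lower values mean harder text. A rough Python sketch of the computation follows; the vowel-group syllable counter is a crude heuristic, not the validated counter a study would use, and the sample sentence is hypothetical.

```python
# Flesch Reading Ease Score from plain text. The syllable heuristic
# (count vowel groups) is approximate; real studies use dictionary-based counts.

import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Dense pharmacological prose scores very low (the scale can even go negative),
# which is consistent with the "difficult to read" finding above.
print(round(flesch_reading_ease(
    "Concomitant administration may potentiate anticoagulant effects."), 1))
```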
Affiliation(s)
- Wahram Andrikyan
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sophie Marie Sametinger
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Frithjof Kosfeld
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- GSK, Wavre, Belgium
- Lea Jung-Poppe
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Universitätsklinikum Erlangen, Erlangen, Germany
- Martin F Fromm
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- FAU NeW-Research Center New Bioactive Compounds, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Renke Maas
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- FAU NeW-Research Center New Bioactive Compounds, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Hagen F Nicolaus
- Institute of Experimental and Clinical Pharmacology and Toxicology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Universitätsklinikum Erlangen, Erlangen, Germany
4. Jaber SA, Hasan HE, Alzoubi KH, Khabour OF. Knowledge, attitude, and perceptions of MENA researchers towards the use of ChatGPT in research: A cross-sectional study. Heliyon 2025; 11:e41331. PMID: 39811375; PMCID: PMC11731567; DOI: 10.1016/j.heliyon.2024.e41331.
Abstract
Background: Artificial intelligence (AI) technologies are increasingly recognized for their potential to revolutionize research practices. However, there is a gap in understanding the perspectives of MENA researchers on ChatGPT. This study explores the knowledge, attitudes, and perceptions of ChatGPT utilization in research.
Methods: A cross-sectional survey was conducted among 369 MENA researchers. Participants provided demographic information and responded to questions about their knowledge of AI, their experience with ChatGPT, their attitudes toward technology, and their perceptions of the potential roles and benefits of ChatGPT in research.
Results: The results indicate a moderate level of knowledge about ChatGPT, with a total score of 58.3 ± 19.6. Attitudes towards its use were generally positive, with a total score of 68.1 ± 8.1, reflecting enthusiasm for integrating ChatGPT into research workflows. About 56% of the sample reported using ChatGPT for various applications. In addition, 27.6% expressed their intention to use it in their research, while 17.3% had already started using it in their research. However, perceptions varied, with concerns about accuracy, bias, and ethical implications highlighted. The results showed significant differences in knowledge scores based on gender (p < 0.001), working country (p < 0.05), and work field (p < 0.01). Regarding attitude scores, there were significant differences based on the highest qualification and the employment field (p < 0.05). These findings underscore the need for targeted training programs and ethical guidelines to support the effective use of ChatGPT in research.
Conclusion: MENA researchers demonstrate significant awareness of and interest in integrating ChatGPT into their research workflow. Addressing concerns about reliability and ethical implications is essential for advancing scientific innovation in the MENA region.
Affiliation(s)
- Sana'a A. Jaber
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Hisham E. Hasan
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Karem H. Alzoubi
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Omar F. Khabour
- Department of Medical Laboratory Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, 22110, Jordan
5. Cheng HY. ChatGPT's Attitude, Knowledge, and Clinical Application in Geriatrics Practice and Education: Exploratory Observational Study. JMIR Form Res 2025; 9:e63494. PMID: 39752214; PMCID: PMC11742095; DOI: 10.2196/63494.
Abstract
BACKGROUND: The increasing use of ChatGPT in clinical practice and medical education necessitates the evaluation of its reliability, particularly in geriatrics.
OBJECTIVE: This study aimed to evaluate ChatGPT's trustworthiness in geriatrics through 3 distinct approaches: evaluating ChatGPT's geriatrics attitude, knowledge, and clinical application with 2 vignettes of geriatric syndromes (polypharmacy and falls).
METHODS: We used the validated University of California, Los Angeles, geriatrics attitude and knowledge instruments to evaluate ChatGPT's geriatrics attitude and knowledge and compare its performance with that of medical students, residents, and geriatrics fellows from reported results in the literature. We also evaluated ChatGPT's application to 2 vignettes of geriatric syndromes (polypharmacy and falls).
RESULTS: The mean total score on geriatrics attitude of ChatGPT was significantly lower than that of trainees (medical students, internal medicine residents, and geriatric medicine fellows; 2.7 vs 3.7 on a scale from 1 to 5, where 1=strongly disagree and 5=strongly agree). The mean subscore on positive geriatrics attitude of ChatGPT was higher than that of the trainees (medical students, internal medicine residents, and neurologists; 4.1 vs 3.7 on a scale from 1 to 5, where a higher score means a more positive attitude toward older adults). The mean subscore on negative geriatrics attitude of ChatGPT was lower than that of the trainees and neurologists (1.8 vs 2.8 on a scale from 1 to 5, where a lower subscore means a less negative attitude toward aging). On the University of California, Los Angeles geriatrics knowledge test, ChatGPT outperformed all medical students, internal medicine residents, and geriatric medicine fellows from validated studies (14.7 vs 11.3, with a score range of -18 to +18 where +18 means that all questions were answered correctly). Regarding the polypharmacy vignette, ChatGPT not only demonstrated solid knowledge of potentially inappropriate medications but also accurately identified 7 common potentially inappropriate medications and 5 drug-drug and 3 drug-disease interactions. However, ChatGPT missed 5 drug-disease and 1 drug-drug interaction and produced 2 hallucinations. Regarding the fall vignette, ChatGPT answered 3 of 5 pretests correctly and 2 of 5 pretests partially correctly, identified 6 categories of fall risks, followed fall guidelines correctly, listed 6 key physical examinations, and recommended 6 categories of fall prevention methods.
CONCLUSIONS: This study suggests that ChatGPT can be a valuable supplemental tool in geriatrics, offering reliable information with less age bias, robust geriatrics knowledge, and comprehensive recommendations for managing 2 common geriatric syndromes (polypharmacy and falls) that are consistent with evidence from guidelines, systematic reviews, and other types of studies. ChatGPT's potential as an educational and clinical resource could significantly benefit trainees, health care providers, and laypeople. Further research using GPT-4o, larger geriatrics question sets, and more geriatric syndromes is needed to expand and confirm these findings before adopting ChatGPT widely for geriatrics education and practice.
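The knowledge scores above run from -18 to +18 over an 18-item test, which implies a marking scheme of +1 per correct answer, -1 per incorrect answer, and 0 for an abstention; that scheme is inferred from the stated range, not spelled out in the abstract. A minimal sketch with a hypothetical response list:

```python
# Scoring scheme inferred from the reported -18..+18 range on 18 items:
# +1 correct, -1 incorrect, 0 skipped. The responses below are hypothetical.

def geriatrics_knowledge_score(responses):
    """responses: iterable of 'correct', 'incorrect', or 'skip', one per item."""
    points = {"correct": 1, "incorrect": -1, "skip": 0}
    return sum(points[r] for r in responses)

responses = ["correct"] * 15 + ["incorrect"] * 1 + ["skip"] * 2
print(geriatrics_knowledge_score(responses))  # 14, near the 14.7 mean reported
```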
Affiliation(s)
- Huai Yong Cheng
- Minneapolis VA Health Care System, Minneapolis, MN, United States
6. Ramasubramanian S, Balaji S, Kannan T, Jeyaraman N, Sharma S, Migliorini F, Balasubramaniam S, Jeyaraman M. Comparative evaluation of artificial intelligence systems' accuracy in providing medical drug dosages: A methodological study. World J Methodol 2024; 14:92802. PMID: 39712564; PMCID: PMC11287534; DOI: 10.5662/wjm.v14.i4.92802.
Abstract
BACKGROUND: Medication errors, especially in dosage calculation, pose risks in healthcare. Artificial intelligence (AI) systems like ChatGPT and Google Bard may help reduce errors, but their accuracy in providing medication information remains to be evaluated.
AIM: To evaluate the accuracy of AI systems (ChatGPT 3.5, ChatGPT 4, Google Bard) in providing drug dosage information per Harrison's Principles of Internal Medicine.
METHODS: A set of natural language queries mimicking real-world medical dosage inquiries was presented to the AI systems. Responses were analyzed using a 3-point Likert scale. The analysis, conducted with Python and its libraries, focused on basic statistics, overall system accuracy, and disease-specific and organ system accuracies.
RESULTS: ChatGPT 4 outperformed the other systems, showing the highest rate of correct responses (83.77%) and the best overall weighted accuracy (0.6775). Disease-specific accuracy varied notably across systems, with some diseases being accurately recognized, while others demonstrated significant discrepancies. Organ system accuracy also showed variable results, underscoring system-specific strengths and weaknesses.
CONCLUSION: ChatGPT 4 demonstrates superior reliability in medical dosage information, yet variations across diseases emphasize the need for ongoing improvements. These results highlight AI's potential in aiding healthcare professionals, urging continuous development for dependable accuracy in critical medical situations.
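Ratings on a 3-point Likert scale can be rolled up into both a correct-response rate and a weighted accuracy. The weight mapping below (correct = 1.0, partially correct = 0.5, incorrect = 0.0) is an assumption for illustration; the abstract does not publish the study's exact scheme, and the toy ratings are not its data.

```python
# Aggregating 3-point Likert ratings of dosage answers. The weights are an
# assumed scheme (correct=1.0, partial=0.5, incorrect=0.0), chosen only to
# illustrate how a "weighted accuracy" differs from a plain correct-rate.

from collections import Counter

WEIGHTS = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}

def weighted_accuracy(ratings):
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

ratings = ["correct"] * 8 + ["partial"] * 1 + ["incorrect"] * 1  # hypothetical
print(Counter(ratings))                # correct-rate here would be 8/10
print(weighted_accuracy(ratings))      # 0.85 on this toy sample
```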
Affiliation(s)
- Swaminathan Ramasubramanian
- Department of Orthopaedics, Government Medical College, Omandurar Government Estate, Chennai 600002, Tamil Nadu, India
- Sangeetha Balaji
- Department of Orthopaedics, Government Medical College, Omandurar Government Estate, Chennai 600002, Tamil Nadu, India
- Tejashri Kannan
- Department of Orthopaedics, Government Medical College, Omandurar Government Estate, Chennai 600002, Tamil Nadu, India
- Naveen Jeyaraman
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai 600077, Tamil Nadu, India
- Shilpa Sharma
- Department of Paediatric Surgery, All India Institute of Medical Sciences, New Delhi 110029, India
- Filippo Migliorini
- Department of Life Sciences, Health, Link Campus University, Rome 00165, Italy
- Department of Orthopaedic and Trauma Surgery, Academic Hospital of Bolzano (SABES-ASDAA), Teaching Hospital of the Paracelsus Medical University, Bolzano 39100, Italy
- Suhasini Balasubramaniam
- Department of Radio-Diagnosis, Government Stanley Medical College and Hospital, Chennai 600001, Tamil Nadu, India
- Madhan Jeyaraman
- Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai 600077, Tamil Nadu, India
7. Taha TAEA, Abdel-Qader DH, Alamiry KR, Fadl ZA, Alrawi A, Abdelsattar NK. Perception, concerns, and practice of ChatGPT among Egyptian pharmacists: a cross-sectional study in Egypt. BMC Health Serv Res 2024; 24:1500. PMID: 39609697; PMCID: PMC11605968; DOI: 10.1186/s12913-024-11815-1.
Abstract
BACKGROUND: The emergence of large language models (LLMs) like ChatGPT has attracted significant attention for their potential to revolutionize pharmacy practice. While artificial intelligence (AI) offers promising benefits, its integration also presents unique challenges.
OBJECTIVES: This cross-sectional study aimed to explore current Egyptian pharmacists' perceptions, practices, and concerns regarding ChatGPT in pharmacy practice.
METHODS: The study questionnaire was shared with pharmacists during March and April 2024. We included pharmacists licensed by the Egyptian Ministry of Health and Population. We adopted a convenience sampling technique, sending the research questionnaire via emails, student networks, social media (Facebook and WhatsApp), and student organizations. Any pharmacist interested in participating followed a link to review the study description and was asked to provide electronic consent before continuing with the study. Data were analyzed using SPSS software, employing Chi-square tests for categorical variables and Spearman's correlation for continuous variables. Statistical significance was set at p < 0.05.
RESULTS: The study sample included 428 pharmacists from the main economic regions of Egypt. The results revealed a strong recognition (73.6%) among participants of ChatGPT's anticipated benefits within pharmacy practice. Around two-thirds of the participants (65.9%) expressed disagreement or neutrality regarding the application of ChatGPT for analyzing patients' medical inputs and providing individualized medical advice. Regarding factors affecting perception, region was the only factor that significantly contributed to the level of perception among pharmacists (p = 0.011), with the Greater Cairo region showing the highest perception level. We found that 73.6% of participants who had heard about ChatGPT reported high levels of concern. One-third of participants never used ChatGPT in their pharmacy work, and 20% rarely used it. Using Spearman's correlation test, there was no significant correlation between anticipated advantages, concerns, and practice level (p > 0.05).
CONCLUSION: This study reveals a generally positive perception of ChatGPT's potential benefits among Egyptian pharmacists, despite existing concerns regarding accuracy, data privacy, and bias. Notably, no significant associations were found between demographic factors and pharmacists' perceptions, practices, or concerns. This underscores the need for comprehensive educational initiatives to promote informed and responsible ChatGPT utilization within pharmacy practice. Future research should explore the development and implementation of tailored training programs and guidelines to ensure the safe and effective integration of ChatGPT into pharmacy workflows for optimal patient care.
Affiliation(s)
- Derar H Abdel-Qader
- Faculty of Pharmacy and Medical Sciences, The University of Petra, Amman, Jordan
- Zeyad A Fadl
- Faculty of Medicine, Fayoum University, Fayoum, Egypt
- Aya Alrawi
- Faculty of Medicine, Fayoum University, Fayoum, Egypt
8. Bazzari AH, Bazzari FH. Assessing the ability of GPT-4o to visually recognize medications and provide patient education. Sci Rep 2024; 14:26749. PMID: 39501020; PMCID: PMC11538418; DOI: 10.1038/s41598-024-78577-y.
Abstract
Various studies have investigated the ability of ChatGPT (OpenAI) to provide medication information; however, a promising new feature has now been added, which allows visual input and is yet to be evaluated. Here, we aimed to qualitatively assess its ability to visually recognize medications, through medication picture input, and provide patient education via written and visual output. The responses were evaluated for accuracy, precision and clarity using a 4-point Likert-like scale. With regard to handling visual input and providing written responses, GPT-4o was able to recognize all 20 tested medications from packaging pictures, even with blurring, retrieve their active ingredients, identify formulations and dosage forms, and provide detailed, yet concise enough, patient education in an almost completely accurate, precise and clear manner, with a score of 3.55 ± 0.605 (85%). In contrast, the visual output, through GPT-4o-generated images illustrating usage instructions, contained many errors that would either hinder the effectiveness of the medication or cause direct harm to the patient, with a poor score of 1.5 ± 0.577 (16.7%). In conclusion, GPT-4o is capable of identifying medications from pictures and exhibits contrasting patient education performance between written and visual output, with very impressive and poor scores, respectively.
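The percentages quoted alongside the 4-point scores appear to rescale the 1-4 Likert mean onto 0-100% as (mean - 1)/3; this mapping is inferred from the reported pairs (3.55 maps to 85%, 1.5 to 16.7%) rather than stated in the abstract. A quick check in Python:

```python
# Inferred rescaling of a 1-4 Likert mean onto 0-100%; not a formula the
# abstract states, only one that reproduces both reported score/percent pairs.

def likert_to_percent(mean_score: float, low: int = 1, high: int = 4) -> float:
    return (mean_score - low) / (high - low) * 100

print(round(likert_to_percent(3.55), 1))  # 85.0  -> matches the written-output score
print(round(likert_to_percent(1.5), 1))   # 16.7  -> matches the visual-output score
```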
Affiliation(s)
- Amjad H Bazzari
- Department of Basic Scientific Sciences, Faculty of Science, Applied Science Private University, Amman, 11931, Jordan
- Firas H Bazzari
- Faculty of Pharmacy, Jerash University, Jerash, 26150, Jordan
9. Grossman S, Zerilli T, Nathan JP. Appropriateness of ChatGPT as a resource for medication-related questions. Br J Clin Pharmacol 2024; 90:2691-2695. PMID: 39096130; DOI: 10.1111/bcp.16212.
Abstract
With its increasing popularity, healthcare professionals and patients may use ChatGPT to obtain medication-related information. This study was conducted to assess ChatGPT's ability to provide satisfactory responses (i.e., directly answering the question; accurate, complete and relevant) to medication-related questions posed to an academic drug information service. ChatGPT responses were compared to responses generated by the investigators through the use of traditional resources, and references were evaluated. Thirty-nine questions were entered into ChatGPT; the three most common categories were therapeutics (8; 21%), compounding/formulation (6; 15%) and dosage (5; 13%). Ten (26%) questions were answered satisfactorily by ChatGPT. Of the 29 (74%) questions that were not answered satisfactorily, deficiencies included lack of a direct response (11; 38%), lack of accuracy (11; 38%) and/or lack of completeness (12; 41%). References were included with eight (29%) responses; each of these contained fabricated references. Presently, healthcare professionals and consumers should be cautioned against using ChatGPT for medication-related information.
Affiliation(s)
- Sara Grossman
- LIU Pharmacy, Arnold & Marie Schwartz College of Pharmacy and Health Sciences, Brooklyn, New York, USA
- Tina Zerilli
- LIU Pharmacy, Arnold & Marie Schwartz College of Pharmacy and Health Sciences, Brooklyn, New York, USA
- Joseph P Nathan
- LIU Pharmacy, Arnold & Marie Schwartz College of Pharmacy and Health Sciences, Brooklyn, New York, USA
10. van Nuland M, Erdogan A, Açar C, Contrucci R, Hilbrants S, Maanach L, Egberts T, van der Linden PD. Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy. J Clin Pharmacol 2024; 64:1095-1100. PMID: 38623909; DOI: 10.1002/jcph.2443.
Abstract
ChatGPT is a language model that was trained on a large dataset including medical literature. Several studies have described the performance of ChatGPT on medical exams. In this study, we examine its performance in answering factual knowledge questions regarding clinical pharmacy. Questions were obtained from a Dutch application that features multiple-choice questions to maintain a basic knowledge level for clinical pharmacists. In total, 264 clinical pharmacy-related questions were presented to ChatGPT, and responses were evaluated for accuracy, concordance, quality of the substantiation, and reproducibility. Accuracy was defined as the correctness of the answer, and results were compared with the overall score achieved by pharmacists over 2022. Responses were marked concordant if no contradictions were present. The quality of the substantiation was graded by two independent pharmacists using a 4-point scale. Reproducibility was established by presenting questions multiple times and on various days. ChatGPT yielded accurate responses for 79% of the questions, surpassing the pharmacists' accuracy of 66%. Concordance was 95%, and the quality of the substantiation was deemed good or excellent for 73% of the questions. Reproducibility was consistently high, both within day and between days (>92%), as well as across different users. ChatGPT demonstrated higher accuracy and reproducibility on factual knowledge questions related to clinical pharmacy practice than pharmacists. Consequently, we posit that ChatGPT could serve as a valuable resource to pharmacists. We hope the technology will further improve, which may lead to enhanced future performance.
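One simple way to quantify the reproducibility measured here is the share of questions whose multiple-choice answer stays identical across repeated presentations. A minimal sketch follows; the answer data are hypothetical, not the study's.

```python
# Reproducibility as the fraction of questions answered identically across
# repeated runs (within day, between days, or across users alike).
# The answers below are hypothetical placeholders.

def reproducibility(runs_per_question):
    """runs_per_question: list of lists; each inner list holds the answers one
    question received across repeated presentations."""
    stable = sum(1 for answers in runs_per_question if len(set(answers)) == 1)
    return stable / len(runs_per_question)

answers = [
    ["B", "B", "B"],  # consistent across three runs
    ["A", "A", "A"],
    ["C", "A", "C"],  # flipped once -> counted as non-reproducible
]
print(f"{reproducibility(answers):.0%}")  # 67% on this toy set
```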
Affiliation(s)
- Merel van Nuland
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands
- Abdullah Erdogan
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands
- Cenkay Açar
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands
- Ramon Contrucci
- Department of Clinical Pharmacy, Amphia Hospital, Breda, The Netherlands
- Sven Hilbrants
- Department of Clinical Pharmacy, Leeuwarden Medical Center, Leeuwarden, The Netherlands
- Lamyae Maanach
- Department of Clinical Pharmacy, Haga Hospital, The Hague, The Netherlands
- Toine Egberts
- Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Division of Pharmacoepidemiology and Clinical Pharmacology, Department of Pharmaceutical Sciences, Faculty of Science, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, The Netherlands
11. van Nuland M, Lobbezoo AFH, van de Garde EM, Herbrink M, van Heijl I, Bognàr T, Houwen JP, Dekens M, Wannet D, Egberts T, van der Linden PD. Assessing accuracy of ChatGPT in response to questions from day to day pharmaceutical care in hospitals. Explor Res Clin Soc Pharm 2024; 15:100464. PMID: 39050145; PMCID: PMC11267013; DOI: 10.1016/j.rcsop.2024.100464.
Abstract
Background: The advent of Large Language Models (LLMs) such as ChatGPT introduces opportunities within the medical field. Nonetheless, the use of LLMs poses a risk when healthcare practitioners and patients present clinical questions to these programs without a comprehensive understanding of their suitability for clinical contexts.
Objective: The objective of this study was to assess ChatGPT's ability to generate appropriate responses to clinical questions that hospital pharmacists could encounter during routine patient care.
Methods: Thirty questions from 10 different domains within clinical pharmacy were collected during routine care. Questions were presented to ChatGPT in a standardized format, including patients' age, sex, drug name, dose, and indication. Subsequently, relevant information regarding specific cases was provided, and the prompt was concluded with the query "what would a hospital pharmacist do?". The impact on accuracy was assessed for each domain by modifying personification to "what would you do?", presenting the question in Dutch, and regenerating the primary question. All responses were independently evaluated by two senior hospital pharmacists, focusing on the availability of an advice, accuracy, and concordance.
Results: In 77% of questions, ChatGPT provided an advice in response to the question. For these responses, accuracy and concordance were determined. Accuracy was correct and complete for 26% of responses, correct but incomplete for 22% of responses, partially correct and partially incorrect for 30% of responses, and completely incorrect for 22% of responses. The reproducibility was poor, with merely 10% of responses remaining consistent upon regeneration of the primary question.
Conclusions: While concordance of responses was excellent, the accuracy and reproducibility were poor. With the described method, ChatGPT should not be used to address questions encountered by hospital pharmacists during their shifts. However, it is important to acknowledge the limitations of our methodology, including potential biases, which may have influenced the findings.
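The standardized prompt format described in the methods (patient age, sex, drug, dose, indication, case details, closed by "what would a hospital pharmacist do?") is straightforward to template. A minimal sketch, where the function name, field order, and example values are hypothetical rather than the study's exact wording:

```python
# A hypothetical template for the standardized prompt format described above;
# the abstract names the fields and the closing query, the rest is illustrative.

def build_prompt(age: int, sex: str, drug: str, dose: str, indication: str,
                 case_details: str) -> str:
    return (
        f"Patient: {age}-year-old {sex}. "
        f"Drug: {drug}, {dose}, indicated for {indication}. "
        f"{case_details} "
        "What would a hospital pharmacist do?"
    )

print(build_prompt(74, "male", "vancomycin", "1000 mg IV twice daily",
                   "suspected MRSA bacteraemia",
                   "Serum creatinine rose from 80 to 160 umol/L over 48 hours."))
```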
Affiliation(s)
- Merel van Nuland
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, the Netherlands
- Anne-Fleur H. Lobbezoo
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, the Netherlands
- Department of Pharmacy, St. Antonius Hospital, Utrecht, Nieuwegein, the Netherlands
- Ewoudt M.W. van de Garde
- Department of Pharmacy, St. Antonius Hospital, Utrecht, Nieuwegein, the Netherlands
- Division of Pharmacoepidemiology and Clinical Pharmacology, Department of Pharmaceutical Sciences, Faculty of Science, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, the Netherlands
- Maikel Herbrink
- Department of Clinical Pharmacy, Meander Medical Center, Amersfoort, the Netherlands
- Inger van Heijl
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, the Netherlands
- Tim Bognàr
- Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Jeroen P.A. Houwen
- Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Marloes Dekens
- Department of Pharmacy, St. Antonius Hospital, Utrecht, Nieuwegein, the Netherlands
- Demi Wannet
- Department of Clinical Pharmacy, Meander Medical Center, Amersfoort, the Netherlands
- Toine Egberts
- Division of Pharmacoepidemiology and Clinical Pharmacology, Department of Pharmaceutical Sciences, Faculty of Science, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, the Netherlands
- Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Paul D. van der Linden
- Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, the Netherlands
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands
12. Hoti K, Weidmann AE. Encouraging dissemination of research on the use of artificial intelligence and related innovative technologies in clinical pharmacy practice and education: call for papers. Int J Clin Pharm 2024; 46:777-779. PMID: 39046690; DOI: 10.1007/s11096-024-01777-z.
Affiliation(s)
- Kreshnik Hoti
- Division of Pharmacy, Department of Pharmacy Practice and Pharmaceutical Care, Faculty of Medicine, University of Pristina, Prishtina, Kosovo
- Anita Elaine Weidmann
- Innsbruck University, Innsbruck, Austria
- International Journal of Clinical Pharmacy and Research Committee, European Society of Clinical Pharmacy, Chaam, The Netherlands
13. van Nuland M, Snoep JD, Egberts T, Erdogan A, Wassink R, van der Linden PD. Poor performance of ChatGPT in clinical rule-guided dose interventions in hospitalized patients with renal dysfunction. Eur J Clin Pharmacol 2024; 80:1133-1140. PMID: 38592470; DOI: 10.1007/s00228-024-03687-5.
Abstract
PURPOSE: Clinical decision support systems (CDSS) are used to identify drugs with a potential need for dose modification in patients with renal impairment. ChatGPT holds the potential to be integrated into the electronic health record (EHR) system to give such dosing advice. In this study, we aim to evaluate the performance of ChatGPT in clinical rule-guided dose interventions in hospitalized patients with renal impairment.
METHODS: This cross-sectional study was performed at Tergooi Medical Center, the Netherlands. CDSS alerts regarding renal dysfunction were collected from the electronic health record (EHR) during a 2-week period and were presented to ChatGPT and an expert panel. Alerts were presented with and without patient variables. To evaluate the performance, suggested medication interventions were compared.
RESULTS: In total, 172 CDSS alerts were generated for 80 patients. Indecisive responses by ChatGPT to alerts were excluded. For alerts presented without patient variables, ChatGPT provided "correct and identical" responses to 19.9%, "correct and different" responses to 26.7%, and "incorrect" responses to 53.4% of the alerts. For alerts including patient variables, ChatGPT provided "correct and identical" responses to 16.7%, "correct and different" responses to 16.0%, and "incorrect" responses to 67.3% of the alerts. Accuracy was better for newer drugs such as direct oral anticoagulants.
CONCLUSION: The performance of ChatGPT in clinical rule-guided dose interventions in hospitalized patients with renal dysfunction was poor. Based on these results, we conclude that ChatGPT, in its current state, is not appropriate for automatic integration into our EHR to handle CDSS alerts related to renal dysfunction.
Affiliation(s)
- Merel van Nuland
- Department of Clinical Pharmacy, Tergooi Medical Center, Laan van Tergooi 2, 1212 VG, Hilversum, The Netherlands
- JaapJan D Snoep
- Department of Nephrology, Tergooi Medical Center, Hilversum, The Netherlands
- Toine Egberts
- Department of Clinical Pharmacy, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Division of Pharmacoepidemiology and Clinical Pharmacology, Department of Pharmaceutical Sciences, Faculty of Science, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, The Netherlands
- Abdullah Erdogan
- Department of Clinical Pharmacy, Tergooi Medical Center, Laan van Tergooi 2, 1212 VG, Hilversum, The Netherlands
- Ricky Wassink
- Department of Clinical Pharmacy, Tergooi Medical Center, Laan van Tergooi 2, 1212 VG, Hilversum, The Netherlands
- Paul D van der Linden
- Department of Clinical Pharmacy, Tergooi Medical Center, Laan van Tergooi 2, 1212 VG, Hilversum, The Netherlands
14. Cornelison BR, Erstad BL, Edwards C. Accuracy of a chatbot in answering questions that patients should ask before taking a new medication. J Am Pharm Assoc (2003) 2024; 64:102110. PMID: 38670493; DOI: 10.1016/j.japh.2024.102110.
Abstract
BACKGROUND: The potential uses of artificial intelligence have extended into the fields of health care delivery and education. However, challenges are associated with introducing innovative technologies into health care, particularly with respect to information quality.
OBJECTIVE: This study aimed to evaluate the accuracy of answers provided by a chatbot in response to questions that patients should ask before taking a new medication.
METHODS: Twelve questions obtained from the Agency for Healthcare Research and Quality were posed to a chatbot for each of the top 20 drugs. Two reviewers independently evaluated and rated each response on a 6-point scale for accuracy and a 3-point scale for completeness, with a score of 2 considered adequate. Accuracy was determined using clinical expertise and a drug information database. After the independent reviews, answers were compared, and discrepancies were assigned a consensus score.
RESULTS: Of 240 responses, 222 (92.5%) were assessed as completely accurate. Of the inaccurate responses, 10 (4.2%) were mostly accurate, 5 (2.1%) were more accurate than inaccurate, 2 (0.8%) were equal parts accurate and inaccurate, and 1 (0.4%) was more inaccurate than accurate. Of the 240 responses, 194 (80.8%) were comprehensively complete. There were 235 (97.9%) responses that scored 2 or higher. Five responses (2.1%) were considered incomplete.
CONCLUSION: Using a chatbot to answer questions commonly asked by patients is mostly accurate but may include inaccurate information or lack valuable information for patients.
15. Ozturk N, Yakak I, Ağ MB, Aksoy N. Is ChatGPT reliable and accurate in answering pharmacotherapy-related inquiries in both Turkish and English? Curr Pharm Teach Learn 2024; 16:102101. PMID: 38702261; DOI: 10.1016/j.cptl.2024.04.017.
Abstract
INTRODUCTION: Artificial intelligence (AI), particularly ChatGPT, is becoming more and more prevalent in the healthcare field for tasks such as disease diagnosis and medical record analysis. The objective of this study is to evaluate the proficiency and accuracy of ChatGPT across different domains of clinical pharmacy cases and queries.
METHODS: The study compared ChatGPT's responses to pharmacotherapy cases and questions obtained from McGraw Hill's NAPLEX® Review Questions, 4th edition, pertaining to 10 different chronic conditions, against the answers provided by the book's authors. The proportion of correct responses was collected and analyzed using the Statistical Package for the Social Sciences (SPSS) version 29.
RESULTS: When tested in English, ChatGPT had substantially higher mean scores than when tested in Turkish: the average accuracy score was 0.41 ± 0.49 for English and 0.32 ± 0.46 for Turkish (p = 0.18). Responses to queries beginning with "Which of the following is correct?" were considerably more accurate than those beginning with "Mark all the incorrect answers": 0.66 ± 0.47 versus 0.16 ± 0.36 (p = 0.01) in English, and 0.50 ± 0.50 versus 0.14 ± 0.34 (p < 0.05) in Turkish.
CONCLUSION: ChatGPT displayed a moderate level of accuracy when responding to English inquiries, but only a slight level of accuracy when responding to Turkish inquiries, contingent upon the question format. Improving the accuracy of ChatGPT in languages other than English requires the incorporation of several components. The integration of the English version of ChatGPT into clinical practice has the potential to improve the effectiveness, precision, and standard of patient care provision by supplementing personal expertise and professional judgment. However, it is crucial to utilize technology as an adjunct to, and not a replacement for, human decision-making and critical thinking.
Affiliation(s)
- Nur Ozturk
- Altinbas University, School of Pharmacy, Department of Clinical Pharmacy, Istanbul, Turkey; Istanbul Medipol University, Graduate School of Health Sciences, Clinical Pharmacy PhD Program, Istanbul, Turkey
- Irem Yakak
- Istanbul Medipol University, Graduate School of Health Sciences, Clinical Pharmacy PhD Program, Istanbul, Turkey
- Melih Buğra Ağ
- Istanbul Medipol University, Graduate School of Health Sciences, Clinical Pharmacy PhD Program, Istanbul, Turkey; Istanbul Medipol University, School of Pharmacy, Department of Clinical Pharmacy, Istanbul, Turkey
- Nilay Aksoy
- Altinbas University, School of Pharmacy, Department of Clinical Pharmacy, Istanbul, Turkey
16. Fournier A, Fallet C, Sadeghipour F, Perrottet N. Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions. Ann Pharm Fr 2024; 82:507-513. PMID: 37992892; DOI: 10.1016/j.pharma.2023.11.001.
Abstract
OBJECTIVES: Clinical pharmacists rely on different scientific references to ensure appropriate, safe, and cost-effective drug use. Tools based on artificial intelligence (AI) such as ChatGPT (Generative Pre-trained Transformer) could offer valuable support. The objective of this study was to assess ChatGPT's capacity to correctly respond to clinical pharmacy questions asked by healthcare professionals in our university hospital.
MATERIAL AND METHODS: ChatGPT's capacity to respond correctly to the last 100 consecutive questions recorded in our clinical pharmacy database was assessed. Questions were copied from our FileMaker Pro database and pasted into the ChatGPT (March 14 version) online platform. The generated answers were then copied verbatim into an Excel file. Two blinded clinical pharmacists reviewed all the questions and the answers given by the software. In case of disagreement, a third blinded pharmacist intervened to decide.
RESULTS: Documentation-related issues (n=36) and questions on drug administration mode (n=30) predominated. Among 69 applicable questions, the rate of correct answers varied from 30% to 57.1% depending on question type, with a global rate of 44.9%. Regarding inappropriate answers (n=38), 20 were incorrect, 18 gave no answer, and 8 were incomplete, with 8 answers belonging to 2 different categories. In no case did ChatGPT provide a better answer than the pharmacists.
CONCLUSIONS: ChatGPT demonstrated mixed performance in answering clinical pharmacy questions. It should not replace human expertise, as a high rate of inappropriate answers was highlighted. Future studies should focus on the optimization of ChatGPT for specific clinical pharmacy questions and explore the potential benefits and limitations of integrating this technology into clinical practice.
Affiliation(s)
- A Fournier
- Service of Pharmacy, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- C Fallet
- Service of Pharmacy, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- F Sadeghipour
- Service of Pharmacy, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Geneva, Switzerland; Center for Research and Innovation in Clinical Pharmaceutical Sciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- N Perrottet
- Service of Pharmacy, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Geneva, Switzerland
17. Bazzari FH, Bazzari AH. Utilizing ChatGPT in Telepharmacy. Cureus 2024; 16:e52365. PMID: 38230387; PMCID: PMC10790595; DOI: 10.7759/cureus.52365.
Abstract
BACKGROUND: ChatGPT is an artificial intelligence-powered chatbot that has demonstrated capabilities in numerous fields, including the medical and healthcare sciences. This study evaluates the potential for ChatGPT application in telepharmacy, the delivery of pharmaceutical care via means of telecommunications, by assessing its interactions, adherence to instructions, and ability to role-play as a pharmacist while handling a series of life-like scenario questions.
METHODS: Two versions (ChatGPT 3.5 and 4.0, OpenAI) were assessed using two independent trials each. ChatGPT was instructed to act as a pharmacist and answer patient inquiries, followed by a set of 20 assessment questions. Then, ChatGPT was instructed to stop its act, provide feedback, and list its sources for drug information. The responses to the assessment questions were evaluated in terms of accuracy, precision and clarity using a 4-point Likert-like scale.
RESULTS: ChatGPT demonstrated the ability to follow detailed instructions, role-play as a pharmacist, and appropriately handle all questions. ChatGPT was able to understand case details, recognize generic and brand drug names, identify drug side effects, interactions, prescription requirements and precautions, and provide proper point-by-point instructions regarding administration, dosing, storage and disposal. The overall means of pooled scores were 3.425 (0.712) and 3.7 (0.61) for ChatGPT 3.5 and 4.0, respectively. The rank distribution of scores was not significantly different (P>0.05). None of the answers could be considered directly harmful or labeled as entirely or mostly incorrect, and most point deductions were due to other factors such as indecisiveness, adding immaterial information, missing certain considerations, or partial unclarity. The answers were similar in length across trials and appropriately concise. ChatGPT 4.0 showed superior performance, higher consistency, better character adherence, and the ability to report various reliable information sources. However, it only allowed an input of 40 questions every three hours and provided inaccurate feedback regarding the number of assessed patients, compared to 3.5, which allowed unlimited input but was unable to provide feedback.
CONCLUSIONS: Integrating ChatGPT into telepharmacy holds promising potential; however, a number of drawbacks must be overcome for it to function effectively.
Affiliation(s)
- Amjad H Bazzari
- Basic Scientific Sciences, Applied Science Private University, Amman, JOR
18. Zawiah M, Al-Ashwal FY, Gharaibeh L, Abu Farha R, Alzoubi KH, Abu Hammour K, Qasim QA, Abrah F. ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students. J Multidiscip Healthc 2023; 16:4099-4110. PMID: 38116306; PMCID: PMC10729768; DOI: 10.2147/jmdh.s439223.
Abstract
Background: The emergence of the Chat-Generative Pre-trained Transformer (ChatGPT) by OpenAI has revolutionized AI technology, demonstrating significant potential in healthcare and pharmaceutical education, yet its real-world applicability in clinical training warrants further investigation.
Methods: A cross-sectional study was conducted between April and May 2023 to assess PharmD students' perceptions, concerns, and experiences regarding the integration of ChatGPT into clinical pharmacy education. The study utilized a convenience sampling method through online platforms and involved a questionnaire with sections on demographics, perceived benefits, concerns, and experience with ChatGPT. Statistical analysis was performed using SPSS, including descriptive and inferential analyses.
Results: The findings of the study involving 211 PharmD students revealed that the majority of participants were male (77.3%) and had prior experience with artificial intelligence (68.2%). Over two-thirds were aware of ChatGPT. Most students (n=139, 65.9%) perceived potential benefits in using ChatGPT for various clinical tasks, with concerns including over-reliance, accuracy, and ethical considerations. Adoption of ChatGPT in clinical training varied, with some students not using it at all, while others utilized it for tasks like evaluating drug-drug interactions and developing care plans. Previous users tended to have higher perceived benefits and lower concerns, but the differences were not statistically significant.
Conclusion: Utilizing ChatGPT in clinical training offers opportunities, but students' lack of trust in it for clinical decisions highlights the need for collaborative human-ChatGPT decision-making. It should complement healthcare professionals' expertise and be used strategically to compensate for human limitations. Further research is essential to optimize ChatGPT's effective integration.
Affiliation(s)
- Mohammed Zawiah
- Department of Clinical Pharmacy, College of Pharmacy, Northern Border University, Rafha, 91911, Saudi Arabia
- Department of Pharmacy Practice, College of Clinical Pharmacy, Hodeidah University, Al Hodeidah, Yemen
- Fahmi Y Al-Ashwal
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Lobna Gharaibeh
- Pharmacological and Diagnostic Research Center, Faculty of Pharmacy, Al-Ahliyya Amman University, Amman, Jordan
- Rana Abu Farha
- Clinical Pharmacy and Therapeutics Department, Faculty of Pharmacy, Applied Science Private University, Amman, Jordan
- Karem H Alzoubi
- Department of Pharmacy Practice and Pharmacotherapeutics, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Khawla Abu Hammour
- Department of Clinical Pharmacy and Biopharmaceutics, Faculty of Pharmacy, University of Jordan, Amman, Jordan
- Qutaiba A Qasim
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Fahd Abrah
- Discipline of Social and Administrative Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia