Published online Sep 20, 2023. doi: 10.5662/wjm.v13.i4.170
Peer-review started: May 2, 2023
First decision: June 21, 2023
Revised: June 29, 2023
Accepted: July 24, 2023
Article in press: July 24, 2023
Artificial intelligence (AI) tools, like OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence demonstrates ChatGPT's capability at a medical-student level on standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries.
Core Tip: Chat Generative Pre-trained Transformer's potential applications span medicine, education, and scientific research, with a focus on addressing its limitations and ethical concerns. This article emphasizes the importance of developing guidelines and fostering collaboration among artificial intelligence (AI) developers, tech companies, researchers, and academic publishers. Key areas for future advancement, such as enhancing AI capabilities, addressing biases, and promoting the ethical use of AI tools, are outlined.
- Citation: Jeyaraman M, Ramasubramanian S, Balaji S, Jeyaraman N, Nallakumarasamy A, Sharma S. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World J Methodol 2023; 13(4): 170-178
- URL: https://www.wjgnet.com/2222-0682/full/v13/i4/170.htm
- DOI: https://dx.doi.org/10.5662/wjm.v13.i4.170
“Chat Generative Pre-trained Transformer (ChatGPT) is an advanced artificial intelligence (AI) language model developed by OpenAI, based on the Generative Pre-trained Transformer (GPT)-4 architecture. Stemming from the original GPT model inspired by the 2017 Transformer architecture, successive iterations culminated in ChatGPT, which generates human-like text and performs tasks like text completion, summarization, and translation. Despite its remarkable capabilities, ChatGPT faces limitations such as sensitivity to input phrasing, plausible yet incorrect responses, and inconsistent world knowledge. Ongoing research aims to enhance the model's performance and address these challenges”. That is an exemplary introduction to the topic of this paper, offered by ChatGPT when asked to explain its nature and historical development within a 100-word limit. It serves as an apt teaser to the scope of capabilities of the system. Within a week of its release, ChatGPT reached one million users, reflecting its wide acceptance and potential impact[1-4]. The model has evolved from GPT-1 through GPT-3, with the latter using 45 TB of data to enhance its ability to generate coherent sequences of words, code, and data. The latest iteration, GPT-4, was released on 14 March 2023. However, it still faces limitations such as the inability to consistently answer semantic, factual, and ethical questions, as well as the presence of biases and prejudices[5-8]. Despite these drawbacks, early reactions to ChatGPT have been largely positive, with users acknowledging its transformative potential across various domains. This review article provides an overview of the current state of ChatGPT, discussing its applications, challenges, and potential impact. Additionally, we will explore possible measures to incorporate ChatGPT into the healthcare and scientific realms, ensuring its utility is maximized while mitigating risks.
AI technology, particularly large language models (LLMs) such as ChatGPT, has demonstrated remarkable capabilities in generating human-like text, answering questions, and providing explanations[9,10]. These advancements have significantly impacted various fields, including medicine[11-13], law[14,15], and academia[9,16-18]. ChatGPT has the potential to revolutionize science by speeding up the article writing and editing process, allowing researchers to focus on their research[19-22]. In education, AI chatbots like ChatGPT can offer students individualized learning experiences, assistance in learning new languages, tutoring and homework help, and answers to challenging questions.
AI systems, such as ChatGPT, have shown potential in various fields, including medical education, clinical decision support, and genetics[9,12,13,15,16]. For instance, ChatGPT has been evaluated in the context of the American Heart Association Basic Life Support and Advanced Cardiovascular Life Support exams, performing reasonably well overall, with accuracy varying depending on the question type[15,24]. In another study, ChatGPT was compared to human respondents in addressing genetics questions, showing similar performance but excelling in memorization-style questions[12,13]. Additionally, ChatGPT performed well on standardized exams such as AMBOSS-Step1, AMBOSS-Step2, NBME-Free-Step1, and NBME-Free-Step2 datasets[12,13].
However, it is essential to acknowledge the significance of human involvement in scientific research, as humans are responsible for posing hypotheses, designing experiments, and interpreting results. Thorp emphasizes the role of machines as tools but highlights that the scientific record ultimately results from human endeavors grappling with critical questions. Furthermore, OpenAI and DeepMind have developed AI systems, ChatGPT and AlphaCode, capable of producing lines of code. Although these systems have the potential to automate certain tasks in large software engineering projects, understanding human needs can be challenging, as these needs may be difficult to describe with machine-readable specifications.
Limitations and potential misuse of AI raise concerns about their impact on human intelligence[23,26,27]. Neural network-based LLMs may pose a challenge for scientific thinking, as they are trained on past information and may struggle to think differently from the past, potentially hindering social and scientific progress. Additionally, biases and inaccuracies may arise from the quality of training datasets. Irene Solaiman, a researcher of the social impact of AI at Hugging Face, has expressed concerns about relying on these models for scientific thinking.
As AI systems continue to evolve, it is crucial to strike a balance between AI advancements and human intervention. AI technology can enhance various aspects of human life, but the significance of human involvement in tasks such as posing hypotheses, designing experiments, and interpreting results should not be overlooked. AI systems can be viewed as tools that complement and extend human intelligence rather than replacing it. While AI has the potential to augment human intelligence, it is unlikely to completely take over, as human creativity, critical thinking, and adaptability remain indispensable in various aspects of life and scientific research.
ChatGPT has demonstrated significant potential in various healthcare-related applications, such as medical education, radiologic decision-making, clinical genetics, and patient care[9,12-16]. Its use in medical education has shown promise as an interactive tool to support learning and problem-solving. ChatGPT performed at or near the passing threshold for all three United States Medical Licensing Exam (USMLE) exams and demonstrated a high level of concordance and insight in its explanations without any specialized training or reinforcement.
ChatGPT has potential applications in healthcare education, research, and practice, such as improving scientific writing, enhancing research equity and versatility, streamlining workflow, saving time, and improving health literacy[16,21,29]. In the context of nursing education, O'Connor suggests using a variety of assessment methods to reduce the risk of automated answers in students' written work, while emphasizing the importance of educating students about academic integrity and the value of critical thinking and scientific writing. AI chatbots like ChatGPT also have the potential to be valuable educational tools, offering tailored learning experiences, assisting with language acquisition, providing homework help and tutoring, and answering questions to aid in understanding complex concepts. One publication discusses how doctors can use ChatGPT to write medical summaries for patients, emphasizing the importance of considering patient acceptance of new technology and its potential negative effects. However, concerns have been raised about the lack of transparency and accountability in AI-generated content.
Research by Rao et al suggests that specialized AI-based clinical decision-making tools will emerge in the future; their study found ChatGPT feasible and potentially beneficial for radiologic decision-making, with scope to improve clinical workflow and promote the responsible use of radiology services. Another paper, by Kung et al, concludes that LLMs such as ChatGPT have the potential to enhance the delivery of individualized, compassionate, and scalable healthcare by assisting with medical education and, potentially, clinical decision-making. In the field of clinical genetics, Duong and Solomon found that ChatGPT did not differ significantly from humans overall in answering genetics questions, but performed better on memorization-type questions than on critical-thinking questions. The study also revealed that ChatGPT frequently provided different answers when asked the same question multiple times, offering plausible explanations for both correct and incorrect answers. Fijačko et al tested ChatGPT's accuracy in answering questions related to life support and resuscitation, finding that it could accurately answer a majority of the questions on the American Heart Association's Basic Life Support and Advanced Cardiovascular Life Support exams.
In neurosurgical research and patient care, ChatGPT has been explored for its potential role in gathering patient data, administering surveys or questionnaires, and providing information about care and treatment. However, the implementation of these technologies should be approached carefully to ensure effectiveness and safety. One paper examines the potential of combining biotechnology and AI to tackle global issues and advance sustainable development goals, covering the wide range of AI applications in the life sciences, including decision support, natural language processing, data mining, and machine learning. The authors emphasize the value of reproducibility in the development of AI models and highlight current research issues and challenges in these fields. Another article explores the use of computational systems biology in stem cell research, emphasizing the value of computational methods in understanding the intricate biological mechanisms underlying stem cell differentiation and regeneration. The article also highlights the importance of interdisciplinary partnerships between computational and experimental biologists to advance stem cell research. Furthermore, it discusses the application of machine learning and deep learning algorithms in stem cell research, serving as an example of how computational systems biology can advance our knowledge of stem cells and their potential therapeutic uses.
AI-powered chatbots, like ChatGPT, have the potential to improve patient outcomes by facilitating communication between patients and healthcare professionals, informing patients about their care and treatment using natural language processing. In the context of neurosurgical research, chatbots can speed up the process of data collection and analysis by gathering information from a large number of patients and may be useful for longitudinal studies, which must track patient outcomes over time. In a study that investigates various ChatGPT prompting strategies for breast cancer screening and breast pain, it was suggested that ChatGPT can be used to help radiologists choose the best imaging modalities, potentially enhancing clinical workflow and encouraging the prudent use of radiology services. An article covering the third year of the coronavirus disease-19 pandemic, along with various topics like computer-based testing (CBT), study design, ChatGPT, journal metrics, and appreciation for reviewers, emphasizes the benefits of CBT and suggests adding more multimedia test items to better represent real-world clinical scenarios. The significance of the study design and the application of ChatGPT for data analysis are also discussed in the article.
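The prompting-strategy study mentioned above essentially supplies the model with structured clinical context before asking for an imaging recommendation. As a purely illustrative sketch of the idea (the helper name, prompt wording, and choice of guideline are our own assumptions, not taken from the cited study), such a prompt could be assembled programmatically before being sent to a chat model:

```python
def build_imaging_prompt(symptom: str, history: str,
                         guideline: str = "ACR Appropriateness Criteria") -> str:
    """Assemble a structured prompt asking a chat model to suggest
    imaging modalities for a clinical presentation.

    The wording and structure here are illustrative only; the cited
    study evaluated its own prompt formats, which may differ.
    """
    return (
        f"You are assisting a radiologist. Using the {guideline}, "
        f"rank the most appropriate imaging modalities.\n"
        f"Presenting complaint: {symptom}\n"
        f"Relevant history: {history}\n"
        f"Answer with a ranked list and a one-line rationale per modality. "
        f"If no imaging is indicated, say so explicitly."
    )

prompt = build_imaging_prompt(
    symptom="unilateral breast pain, no palpable mass",
    history="42-year-old woman, no family history of breast cancer",
)
print(prompt)
```

Keeping the clinical details in named fields, rather than free text, is what makes such prompts auditable and repeatable, which matters if outputs are ever to influence clinical workflow.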
ChatGPT has demonstrated significant potential in various healthcare-related applications, ranging from medical education to clinical decision-making and patient care. Its implementation in these fields has yielded notable results, but it is essential to approach the integration of such technologies carefully to ensure effectiveness, safety, and ethical considerations.
The use of AI models such as ChatGPT in medical education, decision-making, scientific writing, and research has demonstrated their capabilities and potential benefits[9,12,13,15,21,22,29]. However, their utilization comes with potential risks and challenges, including ethical, copyright, transparency, and legal issues, as well as concerns about AI-generated content that is difficult to distinguish from human-written text[16,26]. Moreover, there are issues of bias, plagiarism, lack of originality, inaccurate content, and incorrect citations[16,34]. Several articles emphasize the importance of transparency, integrity, and truth in scientific research and highlight the potential risks associated with the use of AI tools like ChatGPT[20,26]. To address these concerns, the development of appropriate guidelines, regulations, and technologies for detecting AI-generated outputs and ensuring the safe and responsible use of ChatGPT and other LLMs in healthcare, academia, and research has been proposed[16,26,27,34]. The academic community must address these challenges by investing in robust quality-control measures for AI-generated content, including stringent human supervision, validation of generated content, and assurance that AI-generated outputs meet high standards of accuracy, originality, and integrity[29,34]. In addition, funders, publishers, and research institutions should adopt clear policies to encourage openness and public understanding of the use of AI-generated content.
Several key issues need to be discussed, including the potential for AI-generated content to be used unethically, the need for transparency and honesty, the risk of manipulating public opinion or decision-making, the necessity of policies and guidelines, and the potential for AI-generated content to transform research practices and publication. Furthermore, academic publishers should engage in discussions about the implications of AI-generated content and create comprehensive guidelines for publishing such content. Notably, recent articles suggest that it can be difficult for journal editors to recognize and reject papers written by AI, because these models can often produce texts that look very similar to those written by humans. The use of GPT-3 as a co-author in a study published in the journal Oncoscience has sparked a debate about authorship conventions in the context of AI-generated content. While some experts argue that AI should not be given credit for writing, others highlight its role in generating ideas and producing papers. This ongoing discussion may lead to changes in authorship conventions in the future.
The rise of AI models like ChatGPT presents both opportunities and challenges in the fields of medical education, scientific writing, and research. Addressing the ethical, legal, and quality concerns associated with AI-generated content requires the collective efforts of the academic community, publishers, and AI developers. By proactively establishing guidelines, regulations, and policies, the potential benefits of AI-generated content can be realized while minimizing the risks associated with its misuse.
ChatGPT, a powerful AI tool with potential applications across various fields, has been the subject of numerous studies and discussions. Despite its promise in reducing the time required for tasks, it has several limitations and ethical implications that need to be considered, particularly in sensitive fields like healthcare and education[11,20,22,28-30,36]. One primary concern is the accuracy of the information generated by ChatGPT, as it heavily depends on the quality of its training data[23,37]. The risk of biased or misleading results due to poor-quality datasets is particularly relevant in fields like medical education and clinical decision-making, which require high levels of precision[12,13,24]. Such inaccuracies may negatively impact patient outcomes and damage the reputation of the medical community, while in the realm of education, they could lead students astray and impede learning[25,26,29,34]. O'Connor warns of the risk of students using automated tools like ChatGPT to complete written work in nursing education, suggesting a combination of assessment methods, including oral presentations and objective structured clinical examinations, along with smaller pieces of scientific writing. This approach would reduce the risk of automated answers in students' work and emphasize the importance of academic integrity, critical thinking, and scientific writing skills. Additionally, other papers discourage the use of ChatGPT or similar technologies for writing research articles or cheating on assignments[10,26]. Another limitation of ChatGPT is its inability to perform well on tasks requiring critical thinking or reasoning[15,36]. Duong and Solomon acknowledge that the model's performance may not generalize to all types of genetics questions or contexts, as it performs better on memorization-type questions than on critical-thinking ones.
This indicates that relying solely on AI models like ChatGPT may not be sufficient for addressing complex tasks in various fields, including healthcare and education. Moreover, ChatGPT is prone to generating fake references and citations, a phenomenon referred to as "hallucination" or "stochastic parroting". This poses a significant challenge for journal editors, as the output may contain fabricated information, undermining the credibility of scientific research. The potential misuse of ChatGPT for plagiarism also raises concerns[10,16,23,26,36], which could disrupt traditional methods of assigning essays and lead to a decline in academic integrity.
In addition to the aforementioned concerns, the utilization of ChatGPT poses several other challenges, including the dangers of blind trust, excessive regulation, dehumanization, misaligned optimization targets, information overload, false forecasting, and the need for self-reference monitoring[38-40]. One study raises concerns about the possible dangers of using GPT-3 for online radicalization and extremism: the model can be manipulated to amplify extremist ideologies, generate polemics reminiscent of past attackers, and propagate harmful beliefs, and efforts should be made to understand and mitigate these risks effectively. Another study emphasizes the need to handle privacy and Personally Identifiable Information (PII) retention issues when employing AI-powered chatbots, especially in delicate situations like summarizing cover letters, and underlines the requirement for responsible AI use and the creation of privacy protection measures. Moreover, language models like ChatGPT can give rise to various detrimental outcomes, such as discrimination, toxicity, information hazards, misinformation propagation, malicious exploitation, issues with human-computer interaction, and environmental impacts[39,40]. These challenges underscore the significance of responsible usage and of achieving a balance between regulation and innovation.
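One practical mitigation for the PII-retention concerns above is to redact obvious identifiers from user text before it ever reaches a chatbot back end. The following is a minimal illustrative sketch, not a production safeguard: the patterns shown catch only simple e-mail addresses and US-style phone numbers, and real PII detection (names, addresses, record numbers) requires dedicated tooling.

```python
import re

# Deliberately simple patterns: e-mail addresses and US-style phone numbers.
# Real PII detection needs far more thorough, context-aware methods.
_PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def redact_pii(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in _PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

message = "Contact me at jane.doe@example.com or 555-123-4567 about my results."
print(redact_pii(message))
# prints: Contact me at [EMAIL] or [PHONE] about my results.
```

Redacting client-side, before transmission, matters because anything sent to a hosted model may be logged or retained under the provider's terms of service.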
The integration of AI-generated content raises questions about authenticity, accountability, privacy, and security[1,18]. As a result, there is an urgent need for guidelines and regulations to ensure the safe and responsible use of ChatGPT and other LLMs. Chatterjee and Dethlefs emphasize the responsibility of tech companies like OpenAI to provide solutions to manage potential misuse and ensure the ethical use of powerful AI models. The Lancet article also calls for investment in detecting problematic outputs and establishing editorial policies that keep up with evolving technology to ensure the safe and ethical use of AI tools like ChatGPT in scholarly publishing. The reliance on LLMs like ChatGPT for scientific thinking may impede social and scientific progress, as these models are trained on past information and may not be able to think differently from the past. Researchers and academics should remain vigilant in their use of AI tools, emphasizing human involvement in hypothesis formulation, experimental design, and interpretation of results to ensure that the scientific record remains a product of human endeavor in grappling with critical questions.
Other key ethical issues raised by the use of ChatGPT include bias, authorship, privacy and security, transparency, and abuse. One major concern is the potential for bias in ChatGPT's responses, stemming from biases present in the training data, model bias, and non-representative data labelers. This bias can perpetuate and amplify societal biases, leading to unfair and discriminatory outcomes. Privacy and security are also significant issues, as ChatGPT collects data during training, including potentially sensitive personal information, and user interactions with the system may inadvertently disclose personal details, posing risks if obtained by malicious entities. Another ethical concern is the lack of transparency in ChatGPT's decision-making process and the limited disclosure of technical details by OpenAI, making it challenging for users to have control over the generated content and understand the model's limitations. Furthermore, there are concerns regarding the potential for abuse, including the spread of misinformation and the impersonation of individuals using ChatGPT[38,40].
A review of the primary literature on AI-assisted psychosis risk screening in adolescents focuses on two specific methods: chatbot-based screening and analysis of large-scale social media data. The authors highlight ethical issues as the primary challenge in utilizing AI for psychiatric risk screening, emphasizing the need for compliance with the biomedical ethical principles of respect for autonomy, non-maleficence, beneficence, and impartiality. A different study, on ChatGPT's moral authority, shows that it has the potential to enhance users' moral judgment, but also emphasizes its inconsistency and its capacity to corrupt judgment. Users underestimate its influence, making responsible use and instruction in digital literacy necessary to interpret its recommendations effectively. Another paper, on the other hand, offers evidence that language models developed using reinforcement learning from human feedback (RLHF) are capable of "moral self-correction", avoiding harmful outputs when specifically instructed to do so. This ability, which draws on the models' capacity for comprehension and instruction following, emerges in larger models trained with RLHF. The results point to the possibility of teaching language models to follow moral standards, providing cause for cautious hope.
While ChatGPT offers numerous benefits, its limitations and ethical implications must be carefully considered. Publishers, funders, and researchers must establish clear rules and regulations regarding the use of ChatGPT and similar tools in academic research[11,20,22,28-30,36,38-40]. For instance, in healthcare, AI-generated summaries using patient data should be manually reviewed by a healthcare professional before use, taking into account patient acceptability and the potential consequences of technology failure. Transparency and accountability should be prioritized by researchers and developers by disclosing data on bias, privacy, and system constraints. Vulnerable users should be given extra protection, and it is crucial to communicate clearly about ChatGPT's capabilities and restrictions. ChatGPT responses should be justified and given only in response to specific requests. The accuracy of ChatGPT's responses can be improved by including domain-specific knowledge through expert-curated data. At the same time, users must validate information, distinguish fact from fiction, comprehend the ramifications, communicate clearly, and be aware of terms and conditions. Regulators and policymakers should strive for a balanced strategy that avoids overregulation and prevents the concentration of information and communication. To handle complex ethical issues, ethicists need to work in conjunction with specialists from a variety of professions. In terms of regulatory policy, it is advisable to put measures in place that prevent the abuse of ChatGPT in academic settings, such as using varied examination formats and applying AI content detectors for plagiarism detection. The issue of including ChatGPT as a co-author in academic works should be addressed with clear norms and disclosure requirements. It is also important to note that ChatGPT-generated content is not protected by copyright.
The development of risk assessment frameworks and tools to evaluate the potential risks connected to language models should be an area of focus of future studies. It is necessary to incorporate disciplines like ethnographic research and human-computer interaction into the methodological toolbox for language model analysis. Furthermore, to create efficient plans for resolving identified hazards, technological and sociotechnical mitigation research is required. To establish normative performance levels, benchmarking initiatives should be conducted with input from all participants. To fully understand the possible advantages and overall social impact of language models, a thorough investigation should be done. Overall, a cautious and well-considered approach is necessary for the integration of AI tools like ChatGPT, with an emphasis on maintaining human involvement in critical aspects of research, assessment, and decision-making processes. By doing so, we can ensure the responsible and ethical use of AI while leveraging its benefits in various fields.
ChatGPT has demonstrated remarkable potential in various fields, including medicine, education, law, and scientific research[9,11,15,16,18]. This large language model exhibits proficiency in answering questions, producing well-referenced writing, assisting in drafting papers, and providing support for clinical decision-making[9,12,13,15,21]. However, it is important to recognize the limitations and potential risks of AI, as human intelligence, creativity, and critical thinking remain essential components of scientific inquiry and progress. AI tools should be viewed as complementary rather than a replacement for human expertise. In medical education and clinical decision-making, ChatGPT has been found to perform at or near the passing threshold for USMLE, highlighting its potential as an interactive medical education tool and in assisting with radiologic decision-making, streamlining clinical workflow, and improving responsible use of radiology services. Nevertheless, its application in these fields should be approached with care, acknowledging its limitations, the risk of biased or misleading results, and the importance of human involvement in the decision-making process[23,25,29].
In scientific publishing, ChatGPT has demonstrated the capacity to accelerate the writing and editing process. However, concerns about the authenticity and accountability of AI-generated content have been raised[30,34]. Researchers suggest that academic publishers engage in discussions about the implications of AI-generated content and establish comprehensive guidelines for publishing such content. Moreover, some studies emphasize the need for collaboration in preventing potential misuse of AI models like ChatGPT and call for tech companies to take responsibility for managing potential misuse. An article published in Nature highlights the issues that require attention, such as the tasks that should be delegated to LLMs, the qualifications and skills necessary for researchers, the stages of the AI-assisted research process that need human verification, and the laws required to deal with LLMs. There should also be discussions on how LLMs can be used to educate and train researchers, how researchers and funders can encourage the development of independent, open-source LLMs, and what standards of quality should be expected of LLMs. Further topics include how LLMs can advance open science principles and research equity, as well as the legal ramifications LLMs may have on scientific practice.
The development of ChatGPT's ability to interact with humans naturally and ask follow-up questions to reduce bias holds great promise for its future applications. By using reinforcement learning to optimize the model, ChatGPT becomes more robust and capable of sustaining longer conversations with users. The potential of ChatGPT extends beyond the examples mentioned in the paper, including prospects for its use in various industries such as customer service, education, and mental health.
As AI technologies continue to advance, they have the potential to revolutionize numerous fields, including healthcare, education, and scientific research. However, there are several key considerations for future developments, as highlighted by the cited research: (1) Enhancing AI capabilities: Future AI models should aim to improve their performance on critical thinking tasks, as well as memorization-type questions, to be more effective in assisting human experts in various fields; (2) Addressing biases: AI developers should work to minimize biases in AI systems, as these biases can limit the effectiveness and ethical use of AI tools; (3) Ensuring ethical use: Tech companies and researchers must prioritize the ethical use of AI tools and develop strategies to prevent misuse. This includes investing in methods to detect problematic outputs and establishing editorial policies that can adapt to evolving technology; and (4) Fostering collaboration: The integration of AI and human expertise should be encouraged, promoting a collaborative approach that leverages the strengths of both AI and human intelligence. The challenges of ChatGPT are depicted in Figure 1.
By focusing on these key areas, future advancements in AI can help address current limitations and ensure that the technology is employed responsibly and ethically. Through this approach, AI can be harnessed to improve healthcare, education, and scientific research, all while maintaining the importance of human intelligence and critical thinking.
ChatGPT has shown significant potential in revolutionizing various fields, including science, healthcare, and education, by accelerating processes, enhancing personalization, and providing valuable support to professionals and learners alike. Despite its capabilities, it is important to recognize that ChatGPT is not a substitute for human intelligence and its use comes with an array of ethical, legal, and quality-related challenges that need to be addressed to harness its full potential. Establishing clear guidelines and usage policies is essential to ensure the responsible integration of ChatGPT in academic and professional settings. This includes maintaining transparency in AI-generated content, acknowledging the potential for misinformation and plagiarism, and promoting adherence to quality standards. Furthermore, as AI systems like ChatGPT continue to advance, continuous research, interdisciplinary collaboration, and dialogue among stakeholders are crucial in addressing the limitations, risks, and ethical implications of this emerging technology.
In the healthcare sector, for instance, striking a balance between the benefits of AI assistance and the potential risks associated with misinformation is of utmost importance. Close monitoring, human verification, and careful consideration of patient acceptability are necessary to mitigate these risks. Similarly, in education, it is essential to maintain academic integrity and discourage any unethical use of ChatGPT while exploring its potential for personalized learning experiences.
While ChatGPT holds great promise in transforming various industries and enhancing research and learning experiences, it is essential to adopt a cautious and responsible approach to its integration. By fostering interdisciplinary collaboration, ongoing research, and proactive policy development, the research community can ensure that conversational AI is utilized ethically, effectively, and responsibly, paving the way for innovative applications that positively impact society.
Provenance and peer review: Invited article; Externally peer reviewed.
Peer-review model: Single blind
Specialty type: Medical laboratory technology
Country/Territory of origin: India
Peer-review report’s scientific quality classification
Grade A (Excellent): 0
Grade B (Very good): B
Grade C (Good): C
Grade D (Fair): 0
Grade E (Poor): 0
P-Reviewer: Liu XQ, China; Shahria MT, United States
S-Editor: Lin C
L-Editor: A
P-Editor: Xu ZH
1. Taecharungroj V. “What Can ChatGPT Do?” Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn Comput 2023;7:35.
2. OpenAI. GPT-4 Technical Report. 2023 Preprint. Available from: arXiv:2303.08774.
3. Dale R. GPT-3: What’s it good for? Nat Lang Eng 2021;27:113-118.
4. Floridi L, Chiriatti M. GPT-3: Its Nature, Scope, Limits, and Consequences. Minds Mach 2020;30:681-694.
5. Lee JS, Hsiang J. Patent Claim Generation by Fine-Tuning OpenAI GPT-2. 2019 Preprint. Available from: arXiv:1907.02052.
6. Nath S, Marie A, Ellershaw S, Korot E, Keane PA. New meaning for NLP: the trials and tribulations of natural language processing with GPT-3 in ophthalmology. Br J Ophthalmol 2022;106:889-892.
7. Chintagunta B, Katariya N, Amatriain X, Kannan A. Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization. In: Shivade C, Gangadharaiah R, Gella S, Konam S, Yuan S, Zhang Y, Bhatia P, Wallace P, editors. Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations. Online: Association for Computational Linguistics, 2021: 66-76.
8. Wang S, Liu Y, Xu Y, Zhu C, Zeng M. Want To Reduce Labeling Cost? GPT-3 Can Help. 2021 Preprint. Available from: arXiv:2108.13487.
9. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med Educ 2023;9:e45312.
10. King MR; chatGPT. A Conversation on Artificial Intelligence, Chatbots, and Plagiarism in Higher Education. Cell Mol Bioeng 2023;16:1-2.
11. Castelvecchi D. Are ChatGPT and AlphaCode going to replace programmers? Nature 2022.
12. Rao A, Kim J, Kamineni M, Pang M, Lie W, Succi MD. Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making. medRxiv 2023.
13. Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, Madriaga M, Aggabao R, Diaz-Candido G, Maningo J, Tseng V. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2:e0000198.
14. D'Amico RS, White TG, Shah HA, Langer DJ. I Asked a ChatGPT to Write an Editorial About How We Can Incorporate Chatbots Into Neurosurgical Research and Patient Care…. Neurosurgery 2023;92:663-664.
15. Duong D, Solomon BD. Analysis of large-language model vs human performance for genetics questions. Eur J Hum Genet 2023.
16. Sallam M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare (Basel) 2023;11.
17. Patel SB, Lam K. ChatGPT: the future of discharge summaries? Lancet Digit Health 2023;5:e107-e108.
18. Gandhi P, Talwar V. Artificial intelligence and ChatGPT in the legal context. Indian J Med Sci 2023;75:1-2.
19. Gordijn B, ten Have H. ChatGPT: evolution or revolution? Med Health Care Philos 2023;26:1-2.
20. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature 2023;614:224-226.
21. Macdonald C, Adeloye D, Sheikh A, Rudan I. Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J Glob Health 2023;13:01003.
22. Curtis N; ChatGPT. To ChatGPT or not to ChatGPT? The Impact of Artificial Intelligence on Academic Publishing. Pediatr Infect Dis J 2023;42:275.
23. O'Connor S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ Pract 2023;66:103537.
24. Fijačko N, Gosak L, Štiglic G, Picard CT, John Douma M. Can ChatGPT pass the life support exams without entering the American heart association course? Resuscitation 2023;185:109732.
25. Thorp HH. ChatGPT is fun, but not an author. Science 2023;379:313.
26. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023;613:612.
27. Chatterjee J, Dethlefs N. This new conversational AI model can be your friend, philosopher, and guide ... and even your worst enemy. Patterns (N Y) 2023;4:100676.
28. Else H. Abstracts written by ChatGPT fool scientists. Nature 2023;613:423.
29. Lubowitz JH. ChatGPT, An Artificial Intelligence Chatbot, Is Impacting Medical Literature. Arthroscopy 2023;39:1121-1122.
30. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health 2023;5:e105-e106.
31. Holzinger A, Keiblinger K, Holub P, Zatloukal K, Müller H. AI for life: Trends in artificial intelligence for biotechnology. N Biotechnol 2023;74:16-24.
32. Cahan P, Treutlein B. A conversation with ChatGPT on the role of computational systems biology in stem cell research. Stem Cell Reports 2023;18:1-2.
33. Huh S. Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers. J Educ Eval Health Prof 2023;20:5.
34. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health 2023;5:e102.
35. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023;613:620-621.
36. Stokel-Walker C. AI bot ChatGPT writes smart essays - should professors worry? Nature 2022.
37. Mogali SR. Initial impressions of ChatGPT for anatomy education. Anat Sci Educ 2023.
38. Zhou J, Müller H, Holzinger A, Chen F. Ethical ChatGPT: Concerns, Challenges, and Commandments. 2023 Preprint. Available from: arXiv:2305.10646.
39. Weidinger L, Mellor J, Rauh M, Griffin C, Uesato J, Huang P-S, Cheng M, Glaese M, Balle B, Kasirzadeh A, Kenton Z, Brown S, Hawkins W, Stepleton T, Biles C, Birhane A, Haas J, Rimell L, Hendricks LA, Isaac W, Legassick S, Irving G, Gabriel I. Ethical and social risks of harm from Language Models. 2021 Preprint. Available from: arXiv:2112.04359.
40. Zhang C, Zhang C, Li C, Qiao Y, Zheng S, Dam SK, Zhang M, Kim JU, Kim ST, Choi J, Park G-M, Bae S-H, Lee L-H, Hui P, Kweon IS, Hong CS. One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era. 2023 Preprint. Available from: arXiv:2304.06488.
41. McGuffie K, Newhouse A. The Radicalization Risks of GPT-3 and Advanced Neural Language Models. 2020 Preprint. Available from: arXiv:2009.06807v1.
42. Priyanshu A, Vijay S, Kumar A, Naidu R, Mireshghallah F. Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization. 2023 Preprint. Available from: arXiv:2305.15008.
43. Cao XJ, Liu XQ. Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World J Psychiatry 2022;12:1287-1297.
44. Krügel S, Ostermaier A, Uhl M. The moral authority of ChatGPT. 2023 Preprint. Available from: arXiv:2301.07098.
45. Ganguli D, Askell A, Schiefer N, Liao TI, Lukošiūtė K, Chen A, Goldie A, Mirhoseini A, Olsson C, Hernandez D, Drain D, Li D, Tran-Johnson E, Perez E, Kernion J, Kerr J, Mueller J, Landau J, Ndousse K, Nguyen K, Lovitt L, Sellitto M, Elhage N, Mercado N, DasSarma N, Rausch O, Lasenby R, Larson R, Ringer S, Kundu S, Kadavath S, Johnston S, Kravec S, Showk SE, Lanham T, Telleen-Lawton T, Henighan T, Hume T, Bai Y, Hatfield-Dodds Z, Mann B, Amodei D, Joseph N, McCandlish S, Brown T, Olah C, Clark J, Bowman SR, Kaplan J. The Capacity for Moral Self-Correction in Large Language Models. 2023 Preprint. Available from: arXiv:2302.07459.