Artificial intelligence (AI) models, such as ChatGPT (Chat Generative Pre-Trained Transformer), have gained prominence in various fields, including medical research and publishing. This comprehensive review article aims to explore the pros and cons associated with using ChatGPT in the medical domain. Through an in-depth analysis of relevant studies and publications, we highlight the potential benefits of ChatGPT, including improved efficiency, enhanced data analysis, and increased accessibility. However, we also address concerns regarding accuracy, ethical considerations, and the need for human oversight when incorporating ChatGPT into medical research and publishing workflows.
Artificial intelligence (AI) and natural language processing (NLP) models have revolutionized the way we interact with technology. ChatGPT, a prominent language model developed by OpenAI, generates new text that resembles the text it was trained on. 1 It serves as a conversational AI system capable of producing human-like responses in text-based interactions. Built upon a vast corpus of diverse training data, ChatGPT leverages deep learning techniques to understand and generate natural language. Medical applications of ChatGPT encompass the creation of conversational agents capable of accessing and generating medical information from multiple sources and formats. 2 It has been designed to understand and respond to a wide range of topics and questions, making it a versatile tool for various applications, including medical research and publishing. 3 This review article examines the advantages and drawbacks associated with employing ChatGPT for publication in the medical field.
2.1. Improved Efficiency
ChatGPT can assist medical researchers by automating certain tasks, such as literature review, summarization, and data extraction. With the exponential growth of medical literature, researchers often face the challenge of processing large volumes of information efficiently. By leveraging ChatGPT's capabilities, researchers can save time and allocate their efforts to more critical aspects of their work, such as hypothesis generation and experimental design.
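As an illustration of how such task automation might look in practice, the minimal Python sketch below summarizes a single abstract with a ChatGPT-style model. It assumes access to OpenAI's official Python client and an API key in the environment; the model name and prompt wording are illustrative assumptions, not a prescribed workflow, and any output would still require checking by the researcher.

# Minimal sketch: summarizing one abstract with a ChatGPT-style model.
# Assumes the openai Python package (>=1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

abstract = "Paste the full text of the abstract to be summarized here."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Summarize medical abstracts in two sentences for a literature review."},
        {"role": "user", "content": abstract},
    ],
)

print(response.choices[0].message.content)

A summary produced this way is best treated as a first pass to be verified against the source article, consistent with the oversight concerns discussed later in this review.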
Several studies have demonstrated the potential of AI models, including ChatGPT, in improving efficiency in medical research and publishing. 4, 5, 6 For instance, AI-powered chatbots based on GPT have been developed to facilitate literature searching and summarization, significantly reducing the time required for these tasks. 7 Similarly, AI models have been employed to triage and prioritize electronic health records, resulting in improved efficiency and patient care. 8
2.2. Enhanced Data Analysis
Through its language generation capabilities, ChatGPT can aid in extracting relevant information from large volumes of medical literature, identifying patterns, and generating hypotheses. By leveraging the vast knowledge accumulated in medical literature, ChatGPT can assist researchers in exploring complex datasets and extracting meaningful insights.
AI models, including ChatGPT, have shown promise in data analysis tasks. GPT-3 has been utilized to analyze electronic health records and to identify patterns associated with diseases, treatments, and outcomes relevant to public health concerns. 9 Recent developments in text generation have enabled the creation of synthetic clinical notes that can be used to train named entity recognition (NER) models for information extraction from real clinical notes, reducing privacy concerns and increasing data availability. 10 By leveraging ChatGPT's data analysis capabilities, researchers can gain new perspectives and accelerate the pace of medical research. The impressive conversational and programming abilities of ChatGPT also make it an attractive tool for teaching bioinformatics data analysis to beginners. 11
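A hedged sketch of the information-extraction pattern described above follows: the model is asked to return diagnosis and medication mentions from a synthetic clinical note as JSON, which is then parsed and checked in code. The prompt, field names, and model are assumptions made for illustration; a real pipeline would add de-identification, evaluation against annotated data, and human review.

# Sketch: structured entity extraction from a synthetic clinical note.
import json
from openai import OpenAI

client = OpenAI()

note = ("Synthetic note: 62-year-old with type 2 diabetes started on "
        "metformin 500 mg twice daily; hypertension controlled on lisinopril.")

prompt = (
    "Extract entities from the clinical note below and reply with JSON only, "
    'using the keys "diagnoses" and "medications" (lists of strings).\n\n' + note
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)

try:
    entities = json.loads(response.choices[0].message.content)
    print(entities.get("diagnoses"), entities.get("medications"))
except json.JSONDecodeError:
    # Model output was not valid JSON; route the note to human review instead.
    print("Model did not return valid JSON; manual review required.")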
2.3. Increased Accessibility
ChatGPT's user-friendly interface can improve accessibility for non-technical users, such as clinicians and medical practitioners. It allows them to interact with AI systems without needing extensive programming or data science expertise. 12 This accessibility can facilitate broader participation and collaboration within the medical community, enabling clinicians to leverage AI models for research and decision support.
Several studies have explored the application of AI models in improving accessibility in the medical domain for both clinicians and patients. ChatGPT has been used to provide answers to clinical questions, enhancing healthcare professionals' access to medical knowledge. 13 AI chatbots have also been employed to aid in patient education and provide personalized health recommendations, thereby enhancing accessibility and patient engagement. 14, 15
2.4. Overcoming the Language Barrier
Many studies are published in languages other than English, such as Chinese, Russian, Spanish, or French, because their authors may not be fluent in English. During systematic database searches, such foreign-language manuscripts may be overlooked. By entering analytical data and the first draft of an article into ChatGPT, authors can have the AI convert it into a more expressive manuscript, free of grammatical errors. This would improve the manuscript's quality and allow it to be published in English-language medical journals. ChatGPT also offers translation services and can translate a journal article written in another language into English.
Furthermore, ChatGPT can be utilized to bridge the communication gap between medical practitioners from different nations. As medical research and collaboration become more global, efficient communication between researchers and clinicians who speak different languages is more important than ever. ChatGPT can facilitate communication between these specialists, allowing them to share their expertise, debate complex medical problems, and collaborate on research initiatives without language constraints. 16
In addition to real-time translation, ChatGPT can be used to develop multilingual medical resources such as patient education materials, consent forms, and discharge instructions. Creating these resources in many languages would help ensure that patients understand their medical diagnoses, treatment options, and post-treatment care, regardless of their linguistic background. 16
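A minimal sketch of this multilingual-resource idea, assuming the same OpenAI client as in the earlier examples, is shown below: one set of discharge instructions is rendered into several target languages in a loop. The languages, prompt, and model are illustrative, and any clinical translation would still require review by a qualified bilingual professional before patient use.

# Sketch: drafting multilingual discharge instructions with a ChatGPT-style model.
from openai import OpenAI

client = OpenAI()

instructions = ("Take one tablet by mouth every morning with food. "
                "Return to the clinic if fever exceeds 38.5 C.")

for language in ["Spanish", "French", "Mandarin Chinese"]:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[
            {"role": "system",
             "content": f"Translate patient discharge instructions into {language}, keeping doses unchanged."},
            {"role": "user", "content": instructions},
        ],
    )
    print(language, "->", response.choices[0].message.content)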
3.1. Accuracy and Reliability
While ChatGPT has shown impressive language generation capabilities, there are instances where it produces inaccurate or unreliable information. Medical research and publishing require precise, evidence-based information, making it crucial to validate and fact-check the outputs generated by ChatGPT. 17 Researchers should exercise caution when relying solely on these outputs and ensure that the information is cross-verified with reputable sources. Because content generated by ChatGPT and published in the research literature can have a direct impact on patients' health and well-being, it is critical to prioritize accuracy and dependability to avoid harm or misinformation. 18 Efforts should be made to develop robust validation methods and incorporate human expertise to ensure the fidelity of results.
For instance, ChatGPT remains unable to cite references to support the medical content it generates. It lacks access to external medical literature databases such as PubMed and Google Scholar and therefore often produces fictitious references that do not correspond to actual papers. 19, 20 This limitation in accuracy and reliability needs to be acknowledged.
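One practical safeguard against fabricated citations is to check each model-supplied reference against a bibliographic database before relying on it. The sketch below queries NCBI's public E-utilities (esearch) for an article title; the example title and the simple decision rule are illustrative assumptions, and a count of zero is only a trigger for manual checking, not proof that a reference is fabricated.

# Sketch: checking whether a cited title exists in PubMed via NCBI E-utilities.
import requests

def pubmed_hits(title: str) -> int:
    """Return the number of PubMed records whose title matches the query."""
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {"db": "pubmed", "term": f'"{title}"[Title]', "retmode": "json"}
    data = requests.get(url, params=params, timeout=10).json()
    return int(data["esearchresult"]["count"])

# Illustrative title taken from the reference list of this review.
title = "Fabrication and errors in the bibliographic citations generated by ChatGPT"
if pubmed_hits(title) == 0:
    print("No PubMed match found - verify this citation manually.")
else:
    print("At least one PubMed record matches this title.")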
3.2. Ethical Considerations
AI models like ChatGPT raise ethical concerns regarding data privacy, bias, and transparency. 21 Medical research involves sensitive patient data, and maintaining patient privacy is of utmost importance. Researchers should ensure that appropriate data protection measures are in place when utilizing ChatGPT or similar models. Beyond the question of what data are collected, it is critical to protect patients from the misuse of confidential information outside the doctor-patient relationship. 21 Legal and ethical concerns can also arise from the unclear allocation of responsibility when patient harm occurs. 22
Furthermore, biases in training data can result in biased outputs, which may affect medical decisions and patient care. 23 Addressing and mitigating this requires transparency in the training process and identification of potential biases in AI-generated outputs. The two most basic components of transparency are the accessibility and comprehensibility of information; yet information about how these algorithms function is frequently made deliberately difficult to obtain. 21
3.3. Human Oversight and Expertise
The application of ChatGPT in medical research should be complemented by human expertise and oversight. While AI can assist in certain tasks, human input is essential to interpret and validate the results and to ensure that ethical standards are met. 24 AI chat models can be exceedingly sensitive to variations in the wording of questions and often struggle to clarify ambiguous prompts. Additionally, ChatGPT may not always be able to differentiate between reliable and unreliable sources, which can limit its utility in research, as it may merely reproduce previously known material without adding human-like scientific insight and awareness. 3
Researchers should adopt a human-AI collaborative approach, where human experts provide guidance and critical assessment of the outputs generated by ChatGPT. 25 Human oversight can help identify potential errors or biases, address limitations, and ensure that the outputs align with the goals and requirements of the research or publication. 26
3.4. Need for an Updated Database
ChatGPT's training data have not been updated since 2021. 27 Since medicine is an evolving field, the lack of an up-to-date knowledge base limits what the AI engine can generate and decreases its accuracy. Furthermore, updating the underlying data would allow ChatGPT to better understand and respond to changing user needs, making it a more effective and dependable tool for medical professionals and researchers.
The integration of ChatGPT into medical research and publishing offers several benefits, including improved efficiency, enhanced data analysis, and increased accessibility. However, concerns regarding accuracy, ethical considerations, and the need for human oversight cannot be ignored. Researchers should exercise caution when utilizing ChatGPT, ensuring that the outputs are validated and cross-verified with reputable sources. Future research should focus on addressing these concerns, developing robust validation methods, and incorporating AI models like ChatGPT as valuable tools while maintaining human expertise and ethical standards.
Statement of Competing Interest: The authors declare no conflicts of interest.
[1] Biswas S. ChatGPT and the Future of Medical Writing. Radiology. 2023; 307(2).
[2] Sohail SS. A Promising Start and Not a Panacea: ChatGPT's Early Impact and Potential in Medical Science and Biomedical Engineering Research. Ann Biomed Eng. 2023: 1-5.
[3] Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023; 6.
[4] Sedaghat S. Early applications of ChatGPT in medical practice, education and research. Clin Med. 2023; 23(3): 278-279.
[5] Dahmen J, Kayaalp ME, Ollivier M, et al. Artificial intelligence bot ChatGPT in medical research: the potential game changer as a double-edged sword. Knee Surg Sports Traumatol Arthrosc. 2023; 31(4): 1187-1189.
[6] Ruksakulpiwat S, Kumar A, Ajibade A. Using ChatGPT in Medical Research: Current Status and Future Directions. J Multidiscip Healthc. 2023; 16: 1513-1520.
[7] Khan NA, Osmonaliev K, Sarwar MZ. Pushing the Boundaries of Scientific Research with the use of Artificial Intelligence tools: Navigating Risks and Unleashing Possibilities. Nepal J Epidemiol. 2023; 13(1): 1258.
[8] Delshad S, Dontaraju VS, Chengat V. Artificial Intelligence-Based Application Provides Accurate Medical Triage Advice When Compared to Consensus Decisions of Healthcare Providers. Cureus. 2021; 13(8).
[9] Jungwirth D, Haluza D. Artificial Intelligence and Public Health: An Exploratory Study. Int J Environ Res Public Health. 2023; 20(5): 4541.
[10] Li J, Zhou Y, Jiang X, et al. Are synthetic clinical notes useful for real natural language processing tasks: A case study on clinical entity recognition. J Am Med Inform Assoc. 2021; 28(10): 2193-2201.
[11] Shue E, Liu L, Li B, Feng Z, Li X, Hu G. Empowering Beginners in Bioinformatics with ChatGPT. bioRxiv. 2023: 2023.03.07.531414.
[12] Datt M, Sharma H, Aggarwal N, Sharma S. Role of ChatGPT-4 for Medical Researchers. Ann Biomed Eng. 2023: 1-3.
[13] Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT - Reshaping medical education and clinical management. Pak J Med Sci. 2023; 39(2): 605.
[14] Baumgartner C. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med. 2023; 13(3).
[15] Liu J, Wang C, Liu S. Utility of ChatGPT in Clinical Practice. J Med Internet Res. 2023; 25: e48568.
[16] Overcoming Medical Language Barriers with ChatGPT: A Multilingual Solution. https://ts2.space/en/overcoming-medical-language-barriers-with-chatgpt-a-multilingual-solution/. Accessed September 10, 2023.
[17] Májovský M, Černý M, Kasal M, Komarc M, Netuka D. Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora's Box Has Been Opened. J Med Internet Res. 2023; 25.
[18] Will ChatGPT transform healthcare? Nat Med. 2023; 29(3): 505-506.
[19] Walters WH, Wilder EI. Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci Rep. 2023; 13(1): 14045.
[20] Bhattacharyya M, Miller VM, Bhattacharyya D, Miller LE. High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content. Cureus. 2023; 15(5).
[21] Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artif Intell Healthc. 2020: 295-336.
[22] Naik N, Hameed BMZ, Shetty DK, et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg. 2022; 9: 862322.
[23] Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical Considerations of Using ChatGPT in Health Care. J Med Internet Res. 2023; 25: e48009.
[24] Goodman RS, Patrinely JR, Osterman T, Wheless L, Johnson DB. On the cusp: Considering the impact of artificial intelligence language models in healthcare. Med. 2023; 4(3): 139-140.
[25] Temsah O, Khan SA, Chaiah Y, et al. Overview of Early ChatGPT's Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts. Cureus. 2023; 15(4).
[26] Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport. 2023; 40(2): 615-622.
[27] Five Things You Need to Know About AI and ChatGPT. https://www.talsom.com/en/insights/five-things-you-need-to-know-about-ai-and-chatgpt/. Accessed September 11, 2023.
Published with license by Science and Education Publishing, Copyright © 2023 Angad Singh, Tejasvi Dwivedi, Sheetal Bulchandani, Bhanujit Dwivedi, Rubina Sharma and Anahat Kaur
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/