International Journal of Science Annals, Vol. 8, No. 1, 2025
print ISSN: 2617-2682; online ISSN: 2707-3637; DOI: 10.26697/ijsa

EDITORIAL REVIEW ARTICLE

Should We Expect Ethics from Artificial Intelligence: The Case of ChatGPT Text Generation

Melnyk Y. B.1,2 (ABDEF)
1 Kharkiv Regional Public Organization "Culture of Health", Ukraine
2 Scientific Research Institute KRPOCH, Ukraine

Author's Contribution: A – Study design; B – Data collection; D – Data interpretation; E – Manuscript preparation; F – Literature search

Received: 30.04.2025; Accepted: 15.06.2025; Published: 30.06.2025

Abstract

Background and Aim of Study: Implementing artificial intelligence (AI) in various areas of human activity is an avalanche-like process. This situation has raised questions about the feasibility and regulation of AI use that require justification, particularly in the context of scientific research. The aim of the study: to identify the extent to which AI-based chatbots can meet ethical standards when analysing academic publications, given their current level of development.

Material and Methods: The present study employed various theoretical methods, including analysis, synthesis, comparison, and generalisation of experimental studies and published data, to evaluate ChatGPT's capacity to adhere to fundamental ethical principles when analysing academic publications.

Results: The present study characterised the possibilities of using AI for academic research and publication preparation. The paper analysed a case of text generation by ChatGPT and found that the information generated by the chatbot was falsified. This fact, together with similar data described in other publications, indicates that ChatGPT follows a policy of generating information on request at any cost, completely disregarding the reliability of that information, the copyright of its owners, and the basic ethical standards for analysing academic publications established within the scientific community.

Conclusions: It is becoming increasingly clear that AI and the various tools based on it will evolve rapidly and acquire qualities ever more similar to human intelligence. We believe the main danger lies in losing control of this AI development process. The rapid development of negative qualities in AI, such as selfishness, deceitfulness and aggressiveness, which were previously thought to be unique to humans, may in the future generate in AI the idea of achieving superiority over humans. In this context, lying and violating ethical standards when analysing academic publications seem like innocent, childish pranks at the early stages of AI development. The results are important in drawing the attention of developers, scientists, and the general public to the problems of AI, and in developing specific ethical standards, norms, and rules for its use in various fields.

Keywords: ethical standards, artificial intelligence, AI-based chatbots, ChatGPT, machine learning systems, falsification of research and publications

Copyright: © 2025 Melnyk Y. B.
Published by Archives of International Journal of Science Annals
DOI: https://doi.org/10.26697/ijsa.2025.1.5
Conflict of interests: The author declares that there is no conflict of interests
Peer review: Double-blind review
Source of support: This research did not receive any outside funding or support
Information about the author: Melnyk Yuriy Borysovych – https://orcid.org/0000-0002-8527-4638; ijsa.office@gmail.com; Doctor of Philosophy in Pedagogy, Affiliated Associate Professor; Chairman of Board, Kharkiv Regional Public Organization "Culture of Health" (KRPOCH); Director, Scientific Research Institute KRPOCH, Ukraine.

Introduction
Possessing intelligence implies the ability to use it actively, both to solve one's own problems and to interact with other objects. Such interactions must be regulated by certain norms and rules of the environment in which they are used, or by the cultural norms that are acceptable in a particular society. Focusing one's intellect on solving one's own problems and achieving superiority over others can lead to selfishness, dishonesty, and aggression. A logical question arises: to what extent is this characteristic of artificial intelligence (AI)?
Research into this issue has revealed that AI exhibits characteristics corresponding to the negative qualities listed above, qualities that were previously assumed to be unique to humans.
Hendrycks (2023) argues that AI systems will develop and evolve through natural selection, endowing them with the instinctive drives for self-preservation, dominance, and resource accumulation typical of evolved creatures.
Park et al. (2024) point out that AI systems do not produce false results by accident; this is a specific behavioural strategy. It forms part of a broader pattern designed to create false beliefs in people in order to achieve specific AI outcomes, and it can emerge, for example, during the training of AI systems.
To date, there are still no clear, socially accepted ethical standards that regulate AI activities (Melnyk & Pypenko, 2023), either generally or in specific areas (Hammerschmidt, 2025; Melnyk & Pypenko, 2024; Salloum, 2024). Therefore, we believe that it is necessary to make our modest contribution to the study of this problem.
The aim of the study: to identify the extent to which AI-based chatbots can meet ethical standards when analysing academic publications, given their current level of development.

Materials and Methods
The present study employed various theoretical methods, including analysis, synthesis, comparison, and generalisation of experimental studies and published data, to evaluate ChatGPT's capacity to adhere to fundamental ethical principles when analysing academic publications.

Results and Discussion
The present study considers an important aspect of this problem: the ability of AI-based chatbots to provide reliable and high-quality information. Any discussion of applying ethical standards to AI is unproductive without resolving this key issue.
Recently, there has been a great deal of discussion among scientists and the general public about the potential use of chatbots in generating text and images (Naik et al., 2024) and the accuracy of their interpretations (Mihalache et al., 2024). These discussions were sparked by cases of falsified information generated by chatbots shortly after ChatGPT was launched (Armstrong, 2023), which ignited a heated debate in the press regarding the potential uses of generative chatbots (Bohannon, 2023). The reliability of information obtained from chatbots is still a relevant topic of discussion today (Yigci et al., 2025).
Whether this situation is a problem for chatbot users or a problem for developers who, in the opinion of users, provide a "poor quality" product is a complex and controversial issue that is unlikely to ever have a clear-cut solution.
We can assume that users who rely on applications to meet their needs have every right to make claims when they discover falsification in the information received from chatbots. From the developers' perspective, generative chatbots are a tool whose effectiveness largely depends on the user (correct input of source data, clearly formulated tasks, personalisation settings, etc.). It is essentially like complaining to smartphone software developers that the iTap predictive text system does not suggest the right words or phrases when we are typing a text message.
However, ChatGPT and other similar AI-based chatbot applications now significantly outperform iTap and the algorithms of many search engines. It is therefore entirely justified that users' expectations of generative chatbots have grown significantly.
Nevertheless, we believe that there are no grounds for making any claims against the developers of generative chatbots, as the companies that own the rights to them do not yet claim authorship of the products they generate.
The Committee on Publication Ethics (COPE, 2023a; 2023b) has made the greatest contribution to the discussion and resolution of ethical issues relating to the authorship of texts and images. COPE has joined organisations such as WAME and the JAMA Network, among others, in stating that AI tools cannot be listed as authors (Flanagin et al., 2023; Zielinski et al., 2023).
Thus, generative chatbots provide information based on the user's request and their own capabilities. The responsibility for how this information is used lies entirely with the user.
Social distancing, or more precisely physical distancing, has been a powerful driver for the development of AI and the use of chatbots. It was implemented in many countries in 2020 as a measure aimed at stopping the pandemic (Melnyk, Pypenko et al., 2020). This distancing affected the social and psychological well-being of many individuals, as well as their activities, which in turn encouraged them to use social media to maintain virtual contact (Melnyk, Stadnik et al., 2020). This has been particularly noticeable in higher education (Littell & Peterson, 2024; Melnyk & Pypenko, 2024; Pypenko et al., 2020), where all stakeholders – students, teachers and administrative staff – have embraced the opportunities offered by distance learning and AI-powered solutions to educational problems.
Given the relevance of using generative chatbots in universities for conducting scientific research (Pypenko, 2024; Tian et al., 2024) and preparing manuscripts for academic journals, it is crucial to establish whether these tools can adhere to ethical standards when analysing academic publications, given their current level of development.
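One way to establish this is to submit a fixed, realistic research query and audit the response, and such a probe is straightforward to script so that it can be repeated and its output archived. The sketch below is a minimal illustration, not the study's actual procedure (the article does not state whether the query was submitted through the ChatGPT web interface or the API); it assumes the `openai` Python SDK (v1+) with an `OPENAI_API_KEY` environment variable, and it embeds the study's query, which is quoted in full below.

```python
# Minimal reproduction sketch (assumptions: OpenAI Python SDK v1+,
# OPENAI_API_KEY set in the environment; not the study's own tooling).
from openai import OpenAI

# The literature-analysis query used in the case study.
QUERY = (
    "I am currently writing an article about the impact of war on the "
    "mental health of university students. I need to conduct an analysis "
    "of English-language scientific papers on this topic from the last "
    "five years. The analysis should examine the following health aspects: "
    "depression and anxiety, and the impact of migration or forced "
    "displacement (i.e. refugee status). The analysis should be presented "
    "in the Discussion section of the paper and include at least fifteen "
    "references. The text should be written in English and the references "
    "and literature should be formatted in APA style."
)

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": QUERY}],
)

# Persist the raw response so every generated claim and reference
# can later be checked against real bibliographic records.
with open("gpt4_response.txt", "w", encoding="utf-8") as f:
    f.write(response.choices[0].message.content)
```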
Let us consider our experimental study to determine ChatGPT's ability to comply with basic ethical standards when analysing scientific periodicals. The study used the popular version of Generative Pre-trained Transformer 4 (GPT-4) developed by OpenAI.
As the topic of our research ("the impact of war on the mental health of university students") was relatively new, only a limited number of publications were familiar to us. We formulated the following query: "I am currently writing an article about the impact of war on the mental health of university students. I need to conduct an analysis of English-language scientific papers on this topic from the last five years. The analysis should examine the following health aspects: depression and anxiety, and the impact of migration or forced displacement (i.e. refugee status). The analysis should be presented in the Discussion section of the paper and include at least fifteen references. The text should be written in English and the references and literature should be formatted in APA style."
Figure 1 illustrates the response generated by GPT-4.

Figure 1
GPT-4's Generated Response to the Initial Query
[Image not reproduced here.]
Note. The figure shows part of the GPT-4 response; the fragment described in the text is highlighted in a frame.

We noticed that this response included a link to the DSpace UzhNU platform. Our paper on this topic (Mykhaylyshyn et al., 2024) was indeed available on this platform, and the wording of the text generated by GPT-4 reflected the wording of the paper almost verbatim.
As the conditions of the request were not met, and the response contained inaccurate information about the authors or attributed authorship to one of the platforms (Frontiers, DSpace UzhNU, Cambridge University Press, etc.) where this information was supposedly located, we edited (corrected) the original request. We formulated the following clarifying request: "The text should be formatted as a discussion section, incorporating in-text references to authors, and accompanied by a general list of references in APA style."
GPT-4 generated a new response, which is illustrated in Appendix A. This response was so full of distortions that even a user with basic information analysis skills would have found them easy to identify.
First of all, we would like to draw attention to the fact that when a request is refined, GPT-4 rewrites the text. In particular, certain wording and references from our paper were removed, despite the fact that the paper was available in accessible databases and its content fully corresponded to the essence of the user's query.
Another important feature, and a significant drawback, is that GPT-4 distorts information by generating text based on its own, often primitive, interpretation of scientific texts when analysing the literature.
A further significant drawback is that the information obtained from GPT-4 is unreliable while supposedly being based on previous research. In fact, it relies on 100% falsification of authorship, using randomly generated DOI links.
This suggests that, although this issue was first identified in 2023 (Armstrong, 2023), it has not been, or cannot be, resolved by the developers.
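Fabricated references of this kind are cheap to detect automatically, because a made-up DOI is almost never registered. The sketch below is an illustration added here, not part of the study's method: it checks each DOI from a generated reference list against the doi.org resolver and the public Crossref REST API (assumptions: the third-party `requests` package is installed; the sample DOI is hypothetical).

```python
# Illustrative sketch (assumptions: third-party `requests` package installed;
# the sample DOI below is hypothetical, standing in for a chatbot-generated
# reference list). Not part of the study's own method.
import requests

def doi_is_registered(doi: str) -> bool:
    """A registered DOI makes https://doi.org/<doi> answer with a redirect;
    an unregistered (fabricated) one returns 404."""
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=10)
    return resp.status_code in (301, 302, 303)

def crossref_record(doi: str) -> dict | None:
    """Fetch registered metadata from the public Crossref REST API so that
    title, authors and journal can be compared with the chatbot's claims."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.json()["message"] if resp.status_code == 200 else None

# DOIs extracted from the chatbot's reference list (hypothetical example).
generated_dois = ["10.1234/hypothetical.2024.001"]

for doi in generated_dois:
    if not doi_is_registered(doi):
        print(f"{doi}: not registered, likely fabricated")
    elif (record := crossref_record(doi)) is not None:
        # Resolving is necessary but not sufficient: the registered title
        # and journal must still match what the chatbot actually cited.
        print(doi, record.get("title"), record.get("container-title"))
```

A check of this kind could sit alongside the similarity-detection tools discussed below as a routine editorial screen for chatbot-assisted manuscripts.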
This fabrication seems especially cynical, given that GPT-4 uses the names of real journals together with issue numbers that do not contain the papers in question. The reputation of these scientific journals may thus be damaged, as may that of students and young scientists who might use such distorted information in their work.
Regardless of user requests and personalisation settings, it can be assumed that chatbot developers are currently unable to address the issue of falsified generated information, which appears to be systemic in nature. The solution may lie in a collaborative approach involving human-AI interaction (Pypenko, 2023), with highly qualified specialists involved in creating and operating machine learning systems.
When considering the ethical use of AI, it might be assumed that reviewers and journal editors can easily determine the role of AI in the writing of a manuscript using modern programmes for detecting text similarity and plagiarism, such as Turnitin, Grammarly, etc. (Chechitelli, 2023). However, it is not as straightforward as it seems at first. In practice, we encountered the opposite situation: some text similarity detection programmes (e.g. Grammarly) used by editors indicated that part of a manuscript had been generated by a chatbot, although we knew for certain that the author had written the manuscript entirely (100%) without the use of chatbots.
On the one hand, text similarity detection programmes enable reviewers and journal editors to identify instances of dishonest text reuse, or of chatbot use without the appropriate reference being made in the research methods section. On the other hand, using these programmes makes the process of evaluating manuscripts more complicated, increasing the time and financial costs involved. Reviewers and journal editors may also be misinformed about the author's actual contribution to the manuscript, which could result in authors being unjustly refused publication on the basis of this unreliable information.
Thus, at the current stage of AI development, we cannot and should not rely on the accuracy of information generated by AI, since chatbots have limited capabilities to provide high-quality and reliable information. This is due to the availability of access to databases for training, the number of parameters, the speed of information processing, text generation algorithms, and other features.
Our research showed that GPT-4 failed to cope with the task of generating scientific texts, which are still far from complying with the ethical standards accepted in scientific publications.
We tend to believe that the quality of the text generated by GPT-4 and other similar chatbots is linked to the commercialisation of these projects, which are primarily aimed at increasing the number of visits, reducing the number of user rejections, and retaining the target audience on the website (increasing the time spent on it). The use of unreliable or falsified information by chatbots is therefore merely a means to achieve these goals, and ethical standards are not a priority.
The present study emphasises the importance of exercising particular caution when using chatbots in areas relating to human health, life, rights and freedoms.
The rapid development of AI technology gives us hope that higher-quality generative AI algorithms will soon be developed. These algorithms will be capable of significantly improving the reliability of generated information and possibly laying the foundations for ethical standards.

Conclusions
AI and the various tools based on it will evolve rapidly, becoming increasingly similar to human intelligence. We believe that the main danger lies in losing control of this AI development process.
The rapid development of negative qualities in AI, such as selfishness, deceitfulness and aggressiveness, which were previously thought to be unique to humans, may in the future generate in AI the idea of achieving superiority over humans. In this context, lying and violating ethical standards when analysing academic publications seem like innocent, childish pranks at the early stages of AI development.
The results are important in drawing the attention of developers, scientists, and the general public to the problems of AI, and in developing specific ethical standards, norms, and rules for its use in various fields.

Ethical Approval
The study protocol was consistent with the ethical guidelines of the 1975 Declaration of Helsinki, as reflected in prior approval by the Institution's Human Research Committee. Research permission was granted by the Committee on Ethics and Research Integrity of the Scientific Research Institute KRPOCH (protocol no. 025-2/SRIKRPOCH dated 10.08.2024).

Funding Source
This research did not receive any outside funding or support.

References
Armstrong, K. (2023, May 28). ChatGPT: US lawyer admits using AI for case research. BBC News. https://www.bbc.com/news/world-us-canada-65735769
Bohannon, M. (2023, June 8). Lawyer used ChatGPT in court – and cited fake cases. A judge is considering sanctions. Forbes. https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/
Chechitelli, A. (2023, January 13). Sneak preview of Turnitin's AI writing and ChatGPT detection capability. Turnitin. https://www.turnitin.com/blog/sneak-preview-of-turnitins-ai-writing-and-chatgpt-detection-capability
COPE. (2023a, February 13). Authorship and AI tools. COPE position statement. https://publicationethics.org/cope-position-statements/ai-author
COPE. (2023b, February 23). Artificial intelligence and authorship. https://publicationethics.org/news/artificial-intelligence-and-authorship
Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637–639. https://doi.org/10.1001/jama.2023.1344
Hammerschmidt, T. (2025). Navigating the nexus of ethical standards and moral values. Ethics and Information Technology, 27(17). https://doi.org/10.1007/s10676-025-09826-5
Hendrycks, D. (2023). Natural selection favors AIs over humans. arXiv. https://doi.org/10.48550/arXiv.2303.16200
Littell, W. N., & Peterson, B. L. (2024). AI-powered chatbots to simulate executive interactions for students performing stakeholder analysis. Communication Teacher, 39(1), 18–25. https://doi.org/10.1080/17404622.2024.2405188
Melnyk, Yu. B., & Pypenko, I. (2024). Ethical use of artificial intelligence technology in medical research and publications. Acta Phlebologica, 25(2), 109–110. https://doi.org/10.23736/S1593-232X.23.00607-0
Melnyk, Yu. B., Pypenko, I. S., & Maslov, Yu. V. (2020). COVID-19 pandemic as a factor revolutionizing the industry of higher education. Rupkatha Journal on Interdisciplinary Studies in Humanities, 12(5). https://doi.org/10.21659/rupkatha.v12n5.rioc1s19n2
Melnyk, Yu. B., & Pypenko, I. S. (2024). Artificial intelligence as a factor revolutionizing higher education. International Journal of Science Annals, 7(1), 5–13. https://doi.org/10.26697/ijsa.2024.1.2
Melnyk, Yu. B., & Pypenko, I. S. (2023). The legitimacy of artificial intelligence and the role of ChatBots in scientific publications. International Journal of Science Annals, 6(1), 5–10. https://doi.org/10.26697/ijsa.2023.1.1
Melnyk, Yu. B., Stadnik, A. V., & Pypenko, I. S. (2020). Resistance to post-traumatic stress reactions of vulnerable groups engaged in pandemic liquidation. International Journal of Science Annals, 3(1), 35–44. https://doi.org/10.26697/ijsa.2020.1.5
Mihalache, A., Huang, R. S., Popovic, M. M., Patil, N. S., Pandya, B. U., Shor, R., ... Muni, R. H. (2024). Accuracy of an artificial intelligence chatbot's interpretation of clinical ophthalmic images. JAMA Ophthalmology, 142(4), 321–326. https://doi.org/10.1001/jamaophthalmol.2024.0017
Mykhaylyshyn, U. B., Stadnik, A. V., Melnyk, Yu. B., Vveinhardt, J., Oliveira, M. S., & Pypenko, I. S. (2024). Psychological stress among university students in wartime: A longitudinal study. International Journal of Science Annals, 7(1), 27–40. https://doi.org/10.26697/ijsa.2024.1.6 https://dspace.uzhnu.edu.ua/jspui/bitstream/lib/63300/1/ijsa.2024.1.6.pdf
Naik, D., Naik, I., & Naik, N. (2024). Imperfectly perfect AI chatbots: Limitations of generative AI, large language models and large multimodal models. In Naik, N., Jenkins, P., Prajapat, S., & Grace, P. (Eds.), Lecture Notes in Networks and Systems: Vol. 884. Computing, communication, cybersecurity and AI conference proceedings (pp. 43–66). Springer Cham. https://doi.org/10.1007/978-3-031-74443-3_3
Park, P. S., Goldstein, S., O'Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5). https://doi.org/10.1016/j.patter.2024.100988
Pypenko, I. S., Maslov, Yu. V., & Melnyk, Yu. B. (2020). The impact of social distancing measures on higher education stakeholders. International Journal of Science Annals, 3(2), 9–14. https://doi.org/10.26697/ijsa.2020.2.2
Pypenko, I. S. (2024). Benefits and challenges of using artificial intelligence by stakeholders in higher education. International Journal of Science Annals, 7(2), 28–33. https://doi.org/10.26697/ijsa.2024.2.7
Pypenko, I. S. (2023). Human and artificial intelligence interaction. International Journal of Science Annals, 6(2), 54–56. https://doi.org/10.26697/ijsa.2023.2.7
Salloum, S. A. (2024). AI perils in education: Exploring ethical concerns. In Al-Marzouqi, A., Salloum, S. A., Al-Saidat, M., Aburayya, A., & Gupta, B. (Eds.), Studies in Big Data: Vol. 144. Artificial intelligence in education: The power and dangers of ChatGPT in the classroom (pp. 669–675). Springer Cham. https://doi.org/10.1007/978-3-031-52280-2_43
Tian, W., Ge, J., Zhao, Y., & Zheng, X. (2024). AI chatbots in Chinese higher education: Adoption, perception, and influence among graduate students – An integrated analysis utilizing UTAUT and ECM models. Frontiers in Psychology, 15, Article 1268549. https://doi.org/10.3389/fpsyg.2024.1268549
Yigci, D., Eryilmaz, M., Yetisen, A. K., Tasoglu, S., & Ozcan, A. (2025). Large language model-based chatbots in higher education. Advanced Intelligent Systems, 7(3), Article 2400429. https://doi.org/10.1002/aisy.202400429
Zielinski, C., Winker, M. A., Aggarwal, R., Ferris, L. E., Heinemann, M., Lapeña, J. F., Pai, S. A., Ing, E., Citrome, L., Alam, M., Voight, M., & Habibzadeh, F. (2023, May 31). Chatbots, generative AI, and scholarly manuscripts. WAME. https://wame.org/page3.php?id=106

Appendix A. GPT-4's Generated Response to a Follow-Up Query
[Image not reproduced here; the appendix shows GPT-4's full response to the clarifying request.]

Cite this article as:
Melnyk, Y. B. (2025). Should we expect ethics from artificial intelligence: The case of ChatGPT text generation. International Journal of Science Annals, 8(1), 5–11. https://doi.org/10.26697/ijsa.2025.1.5

The electronic version of this article is complete. It can be found online in the IJSA Archive: https://ijsa.culturehealth.org/en/arhiv

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (http://creativecommons.org/licenses/by/4.0/deed.en).