Normative Constraints and Application Boundaries of Generative AI in Academic Writing
AI-Use Disclosure Regimes: Principles, Tiered Thresholds, and Policy Implementation Across Journals and Disciplines
This group focuses on the normative design and practical implementation of generative-AI-use disclosure: why disclosure is necessary; when, and at what threshold, use becomes substantive enough to require disclosure; the granularity and wording of disclosure statements; and how policies and their enforcement differ across journals and disciplines. It also emphasizes accountability-oriented disclosure frameworks and editorial statements.
- Disclosing generative AI use for writing assistance should be voluntary(Mohammad Hosseini, Bert Gordijn, G. Kaebnick, Kristi L. Holmes, 2025, Research Ethics)
- Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary?(David B. Resnik, Mohammad Hosseini, 2025, Accountability in Research)
- The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing.(A. Tang, K. Li, K. Kwok, Liujiao Cao, Stanley Luong, W. Tam, 2023, Journal of Nursing Scholarship)
- When and how to disclose AI use in academic publishing: AMEE Guide No.192(J. Cleland, E. Driessen, K. Masters, L. Lingard, L. A. Maggio, 2025, Medical Teacher)
- The Presence and Nature of AI-Use Disclosure Statements in Medical Education Journals: A bibliometric study(M. Ans, Lauren A. Maggio, Hamza Algodi, Joseph A. Costello, Erik W. Driessen, K. Oswald, Lorelei Lingard, 2025, Perspectives on …)
- Documenting Disclosure: Limited Reporting of Generative AI Usage in Radiology Research Manuscripts.(D. Jonah Barrett, R. Heng, J. Perchik, 2025, Academic Radiology)
- Variability of Guidelines and Disclosures for AI-Generated Content in Top Surgical Journals(Sina J. Torabi, Michael J. Warn, B. Bitner, Y. Haidar, T. Tjoa, E. C. Kuan, 2024, Surgical Innovation)
- Do Ophthalmology Journals Have AI Policies for Manuscript Writing?(Amr Almobayed, Taher K. Eleiwa, Omar Badla, A. Khodor, R. Ruiz-Lozano, A. Elhusseiny, 2024, American Journal of Ophthalmology)
- Neurosurgical journals’ policies on artificial intelligence use in manuscript preparation and peer review(A. A. Mohamed, Saahas Rajendran, Daniel Colome, Emma C Sargent, Clemens M Schirmer, Meena Vessell, Brandon Lucke-Wold, Akshay Sharma, O. Adogwa, Stephen Pirris, 2025, Neurosurgical Review)
- The machine in the manuscript: editorial dilemmas(Donghee Shin, Angelika Suchanová, J. White, Liam Magee, Manh-Tung Ho, Houda Chakiri, 2025, AI & SOCIETY)
- Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing(G. Kaebnick, D. Magnus, Audiey Kao, Mohammad Hosseini, David B. Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn, Mark J. Cherry, 2023, Medicine, Health Care and Philosophy)
- Responsible use of large language models in manuscript authorship, peer review, and editorial processes: a Delphi consensus among editors-in-chief of anaesthesia and pain medicine journals (RULE-AP).(A. De Cassai, B. Dost, John G. Augoustides, Leonard Azamfirei, Z. Alanoğlu, Liana Azi, J. A. Calvache, Vladimír Černý, S. D. De Hert, A. Eldawlatly, M. K. Farber, Diogo Sobreira-Fernandes, M. Fettiplace, D. Galante, Rakesh Garg, Heidi V Goldstein, A. Abad-Gurumeta, Lalit Gupta, Hugh C. Hemmings, Christopher A. Jones, M. Hochberg, J. Katz, Hyun Kang, G. Talu, D. Kraychete, R. Landau, Sangseok Lee, Hillary D. Lum, C. Lundgren, T. Makuloluwa, P. Martelletti, Tonya M. Palermo, Philip J. Peyton, P. Poisbeau, J. Rathmell, Antoine Roquilly, S. K. Schwarz, Madhuragauri Shevade, P. Sloan, BobbieJean Sweitzer, Ary Serpa Neto, Philip F Stahel, Zerrin Özköse Şatirlar, Alparslan Turan, Dennis C. Turk, M. Valeriani, Mads U. Werner, Paul J. Young, I. B. Zabolotskikh, K. Zacharowski, Szymon Zdanowski, 2026, British Journal of Anaesthesia)
- Authorship Statement for Generative Artificial Intelligence: Assuring Trust and Accountability(Joseph Crawford, Alison Purvis, Averil Grieve, Louise Taylor, 2026, Journal of University Teaching and Learning Practice)
- Consensus on the Application of Generative Artificial Intelligence in Medical Manuscript Writing(Guangtao Huang, Rong Zhong, Giovanna Orsini, Rei Ogawa, Jiming Kong, Jianglin Zhang, Mingxing Lei, Siwei Wang, Yiqiang Zhan, Folke Sjoberg, Ilaria Dal’ Pra, Gaofeng Wang, Xusheng Wang, Xiaozhuo Wu, Liangqiao Gui, Zhe Li, Bronwyn Griffin, Lufeng Ding, David N. Herndon, David D Greenhalgh, Jing-Liang Huan, Qianqian Liu, Lei Jiang, Michael Nilsson, Yikun Ji, Yanni Wang, Fenfang Wu, Yixin Zhang, A. Zamboni, Rui Guo, M. Raucci, Qihui Zhou, Zhaohong Chen, L. P. Vana, Hong Chen, Ren-ju Song, Jianan Li, Jun Wu, 2026, Regenesis Repair Rehabilitation)
- RESPONSIBLE USE OF AI-GENERATED CONTENT IN VIETNAMESE SCHOLARLY PUBLISHING: EVIDENCE FROM JOURNAL POLICIES AND EDITORIAL PRACTICES(Tran Huu Tuyen, 2026, Veredas do Direito)
Academic Integrity and Ethical Risk: Overall Frameworks, Mechanisms of Misconduct, Detection Difficulties, and Consequences of Improper Use
Starting from the overall risk mechanisms of academic integrity and research ethics, this group discusses how over-reliance, non-disclosure, and misconduct undermine the credibility and fairness of science. It also covers detection difficulties, effects on assessment and learning processes, and behavioral consequences (in student and educational settings). The group stresses that integrity governance must address institutions, capabilities, and ethical beliefs together, not technical detection alone.
- Writing for Machines, Formatting Originality: Plagiarism Detection and the Automation of Authorship(Ethan Stoneman, Joseph C. Packer, 2026, Theory, Culture & Society)
- Giving Credit Where Credit is Due: An Artificial Intelligence Contribution Statement for Research Methods Writing Assignments(Nicole Alea Albada, Vanessa E. Woods, 2024, Teaching of Psychology)
- Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: a narrative review(Sang-Jun Kim, 2024, Science Editing)
- Publication Ethics in the Era of Artificial Intelligence(Zafer Koçak, 2024, Journal of Korean Medical Science)
- Impact of generative AI in academic integrity and learning outcomes: A case study in the Upper East Region(JK Wiredu, N Seidu Abuba, 2024, Asian Journal of Research …)
- Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda(Kyle Bittle, Omar El-Gayar, 2025, Information)
- Generative AI in Science Education: A Learning Revolution or a Threat to Academic Integrity? A Bibliometric Analysis(M. D. H. Wirzal, N. A. H. Md Nordin, N. S. Abd Halim, M. A. Bustam, 2024, Jurnal Penelitian dan Pengkajian Ilmu Pendidikan: e-Saintika)
- The impact of generative AI on academic integrity of authentic assessments within a higher education context(Alexander K. Kofinas, C. Tsay, D. Pike, 2025, British Journal of Educational Technology)
- Investigating the Transformative Impact of Generative AI on Academic Integrity Across Diverse Educational Domains(Ajay Dhruv, Sayantika Saha, Shivani Tyagi, Vijal Jain, 2024, 2024 2nd International Conference on Advancement in Computation & Computer Technologies (InCACCT))
- ChatGPT in higher education: Considerations for academic integrity and student learning(M Sullivan, A Kelly, P McLaughlan, 2023, Journal of Applied Learning …)
- ChatGPT and the Rise of Generative AI: Threat to Academic Integrity?(Damian Okaibedi, 2023, Journal of Responsible Technology)
- The rapid rise of generative AI and its implications for academic integrity: Students' perceptions and use of chatbots for assistance with assessments(Jan Henrik Gruenhagen, P. Sinclair, Julie-Anne Carroll, Philip R.A. Baker, Ann Wilson, Daniel Demant, 2024, Computers and Education: Artificial Intelligence)
- DFAS-EEP-AI: The Disclosure and Detection Extension for Artificial Intelligence Use in Academic Publishing(Hasan Alaali, 2025, SSRN Electronic Journal)
- Artificial intelligence-created personal statements compared with applicant-written personal statements: a survey of obstetric anesthesia fellowship program directors in the United States.(A. M. Ruiz, M. Kraus, K.W. Arendt, D. Schroeder, E. E. Sharpe, 2024, International Journal of Obstetric Anesthesia)
- “Helping Me Versus Doing It for Me”: Designing for Agency in LLM-Infused Writing Tools for Science Journalism(Sachita Nishal, Mina Lee, Nicholas Diakopoulos, Jennifer Wortman Vaughan, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Academic Integrity Within the Medical Curriculum in the Age of Generative Artificial Intelligence(Kldiashvili Ekaterina, Mamiseishvili Ana, Zarnadze Maia, 2025, Health Science Reports)
- Academic integrity in the generative AI (GenAI) era: a collective editorial response(A. Tlili, Melissa Bond, Aras Bozkurt, Khalid H. Arar, T. Chiu, Pericles 'Asher' Rospigliosi, 2025, Interactive Learning Environments)
- Academic Integrity and Artificial Intelligence: An Overview(Rahul Kumar, Sarah Elaine Eaton, Michael Mindzak, Ryan Morrison, 2024, Springer International Handbooks of Education)
- The Challenge of Academic Integrity in the Age of Generative Artificial Intelligence(Chao Huang, A. Alhur, Mehreen Azam, S. Naeem, 2025, Libri)
- Emerging Research and Policy Themes on Academic Integrity in the Age of Chat GPT and Generative AI(Sterling Plata, Maria Ana G. De Guzman, Arthea Quesada, 2023, Asian Journal of University Education)
- Ethics in publishing: From plagiarism to artificial intelligence.(F. Alnaimat, Abdel Rahman Feras Alsamhori, B. Seiil, Ainur B Qumar, O. Zimba, 2026, Autoimmunity Reviews)
- Research misconduct: Use of generative artificial intelligence in writing may lower the threshold.(Shigeki Matsubara, Daisuke Matsubara, 2024, European Journal of Obstetrics & Gynecology and Reproductive Biology)
- On Undisclosed or Improper Use of Generative AI and Sanctions(Michael Pflanzer, Veljko Dubljević, 2026, AJOB Neuroscience)
- AI for scientific integrity: detecting ethical breaches, errors, and misconduct in manuscripts(Diogo Pellegrina, Mohamed Helmy, 2025, Frontiers in Artificial Intelligence)
Authorship and the Boundaries of Responsibility: Accountable Contribution Definitions, Synthetic-Authorship Risks, and Detection Limits
This group focuses on authorship and the boundaries of responsibility: how AI involvement disrupts the concept of the human author; how contribution and controllability should be defined; and the accountability problem ("who is responsible?") raised by synthetic or structurally automated text generation. It combines research on the unreliability of detectors and on how learners in the classroom perceive authorship to support the governance conclusion that authorial integrity cannot be delegated.
- Writing for Machines, Formatting Originality: Plagiarism Detection and the Automation of Authorship(Ethan Stoneman, Joseph C. Packer, 2026, Theory, Culture & Society)
- The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing.(A. Tang, K. Li, K. Kwok, Liujiao Cao, Stanley Luong, W. Tam, 2023, Journal of Nursing Scholarship)
- The machine in the manuscript: editorial dilemmas(Donghee Shin, Angelika Suchanová, J. White, Liam Magee, Manh-Tung Ho, Houda Chakiri, 2025, AI & SOCIETY)
- Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing(G. Kaebnick, D. Magnus, Audiey Kao, Mohammad Hosseini, David B. Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn, Mark J. Cherry, 2023, Medicine, Health Care and Philosophy)
- When AI Writes the Letters: Recognizing Synthetic Authorship Patterns in Medical Publishing(É. Lupon, G. Micicoi, 2026, Publications)
- Attributing AI Authorship: Towards a System of Icons for Legal and Ethical Disclosure(A. Joseph, Patricia Abril, Alissa del Riego, 2025, SSRN Electronic Journal)
- Authorship Statement for Generative Artificial Intelligence: Assuring Trust and Accountability(Joseph Crawford, Alison Purvis, Averil Grieve, Louise Taylor, 2026, Journal of University Teaching and Learning Practice)
- GAIDeT (Generative AI Delegation Taxonomy): A taxonomy for humans to delegate tasks to generative artificial intelligence in scientific research and publishing(Yana Suchikova, N. Tsybuliak, J. A. Teixeira da Silva, Serhii Nazarovets, 2025, Accountability in Research)
- The Presence and Nature of AI-Use Disclosure Statements in Medical Education Journals: A bibliometric study(M. Ans, Lauren A. Maggio, Hamza Algodi, Joseph A. Costello, Erik W. Driessen, K. Oswald, Lorelei Lingard, 2025, Perspectives on …)
- Ghost in the Manuscript: Integrity, Authorship, and Artificial Intelligence in the Academy(S. Bartell, 2025, Journal of College and Character)
- Scientific Muse and Misuse: Reevaluating Authorship Attribution and Liability Allocation in the Generative AI Age(Inbar Cohen Ganot, 2026, SSRN Electronic Journal)
- Bibliometric, methodological and reporting characteristics of systematic reviews with explicit AI disclosure statements: an exploratory meta-research study(A. Bastounis, Lukasz Lagojda, William E. A. Sheppard, G.R. Daly, E. Poku, A. Booth, 2026, BMC Medical Research Methodology)
- AI collaboration or cheating? Using explainable authorship verification to measure AI assistance in academic writing(E Oliveira, M Mohoni, S López-Pernas, M Saqr, 2026, Educational Technology & …)
- Authorship Identification of AI-Generated Academic Texts: A Pilot Study in the Russian University Context(R. Zaripova, Andrew V. Danilov, L. L. Salekhova, T. R. Fazliakhmetov, Eduard Krylov, 2025, 2025 18th International Conference on Development in eSystem Engineering (DeSE))
- Authorial Integrity in the Age of Artificial Intelligence: A Scientific Autobiography.(Christopher C. Colenda, 2025, The American Journal of Geriatric Psychiatry)
- Student Perceptions of AI-Assisted Writing and Academic Integrity: Ethical Concerns, Academic Misconduct, and Use of Generative AI in Higher Education(Brady Lund, Nishith Reddy Mannuru, Zoë Abbie Teel, Tae Hee Lee, Nathanlie Ortega, Sara Simmons, Evelyn Ward, 2025, AI in Education)
- An ethics module on academic integrity and generative AI(C. Hill, J. Hargis, 2024, New Directions for Teaching and Learning)
Technical and Institutional Constraints on Application Boundaries: Accuracy, Privacy and Confidentiality, Traceability, and Human-in-the-Loop Control
This group delimits where generative AI can be used in writing and publishing workflows from the perspective of application boundaries: accuracy and verifiability (hallucinated and unverifiable output), privacy and confidentiality, traceability and evidence chains, and human-in-the-loop control. It also discusses the engineering feasibility and limits of detection and governance tools, and uses cross-context cases (including disciplines such as pharmacy) to show how these boundaries are implemented in practice.
- RESPONSIBLE USE OF AI-GENERATED CONTENT IN VIETNAMESE SCHOLARLY PUBLISHING: EVIDENCE FROM JOURNAL POLICIES AND EDITORIAL PRACTICES(Tran Huu Tuyen, 2026, Veredas do Direito)
- Responsible use of large language models in manuscript authorship, peer review, and editorial processes: a Delphi consensus among editors-in-chief of anaesthesia and pain medicine journals (RULE-AP).(A. De Cassai, B. Dost, John G. Augoustides, Leonard Azamfirei, Z. Alanoğlu, Liana Azi, J. A. Calvache, Vladimír Černý, S. D. De Hert, A. Eldawlatly, M. K. Farber, Diogo Sobreira-Fernandes, M. Fettiplace, D. Galante, Rakesh Garg, Heidi V Goldstein, A. Abad-Gurumeta, Lalit Gupta, Hugh C. Hemmings, Christopher A. Jones, M. Hochberg, J. Katz, Hyun Kang, G. Talu, D. Kraychete, R. Landau, Sangseok Lee, Hillary D. Lum, C. Lundgren, T. Makuloluwa, P. Martelletti, Tonya M. Palermo, Philip J. Peyton, P. Poisbeau, J. Rathmell, Antoine Roquilly, S. K. Schwarz, Madhuragauri Shevade, P. Sloan, BobbieJean Sweitzer, Ary Serpa Neto, Philip F Stahel, Zerrin Özköse Şatirlar, Alparslan Turan, Dennis C. Turk, M. Valeriani, Mads U. Werner, Paul J. Young, I. B. Zabolotskikh, K. Zacharowski, Szymon Zdanowski, 2026, British Journal of Anaesthesia)
- Publication Ethics in the Era of Artificial Intelligence(Zafer Koçak, 2024, Journal of Korean Medical Science)
- Practical Considerations and Ethical Implications of Using Artificial Intelligence in Writing Scientific Manuscripts(M. N. Yousaf, 2025, ACG Case Reports Journal)
- Visible Sources and Invisible Risks: Exploring the Impact of Ai Disclosure on Perceived Credibility of Ai-Generated Content(Lin Teng, Yi-Qing Zhang, 2025, Journal of Science Communication)
- AI content detection in the emerging information ecosystem: new obligations for media and tech companies(Alistair Knott, Dino Pedreschi, Toshiya Jitsuzumi, Susan Leavy, D. Eyers, Tapabrata Chakraborti, Andrew Trotman, Sundar Sundareswaran, Ricardo Baeza-Yates, P. Biecek, Adrian Weller, Paul D. Teal, Subhadip Basu, Mehmet Haklidir, Virginia Morini, Stuart Russell, Y. Bengio, 2024, Ethics and Information Technology)
- Consensus on the Application of Generative Artificial Intelligence in Medical Manuscript Writing(Guangtao Huang, Rong Zhong, Giovanna Orsini, Rei Ogawa, Jiming Kong, Jianglin Zhang, Mingxing Lei, Siwei Wang, Yiqiang Zhan, Folke Sjoberg, Ilaria Dal’ Pra, Gaofeng Wang, Xusheng Wang, Xiaozhuo Wu, Liangqiao Gui, Zhe Li, Bronwyn Griffin, Lufeng Ding, David N. Herndon, David D Greenhalgh, Jing-Liang Huan, Qianqian Liu, Lei Jiang, Michael Nilsson, Yikun Ji, Yanni Wang, Fenfang Wu, Yixin Zhang, A. Zamboni, Rui Guo, M. Raucci, Qihui Zhou, Zhaohong Chen, L. P. Vana, Hong Chen, Ren-ju Song, Jianan Li, Jun Wu, 2026, Regenesis Repair Rehabilitation)
- Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda(Kyle Bittle, Omar El-Gayar, 2025, Information)
- Design and evaluation of an LLM literature review assistant(Z Xia, A Bakharia, 2025, ASCILITE 2025)
- Generative artificial intelligence (Gen-AI) in pharmacy education: Utilization and implications for academic integrity: A scoping review(R. Mortlock, C. Lucas, 2024, Exploratory Research in Clinical and Social Pharmacy)
- Nonhuman "Authors" and Implications for the Integrity of Scientific Publication and Medical Knowledge.(A. Flanagin, Kirsten Bibbins-Domingo, M. Berkwits, S. Christiansen, 2023, JAMA)
- AI-Assisted Writing Disclosure Requirements in Academic Publishing: Privacy Rights Versus Institutional Overreach(Yue Liu, 2025, Preprints.org)
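The accuracy, confidentiality, and human-in-the-loop constraints surveyed in this group can be pictured as a simple acceptance gate in an editorial workflow. The following Python sketch is purely illustrative; the function and parameter names are assumptions, not drawn from any cited policy or tool:

```python
# Illustrative human-in-the-loop gate for AI-assisted manuscript text.
# All names here are hypothetical; real editorial systems differ.

def accept_ai_passage(claims_verified: bool,
                      citations_checked: bool,
                      confidential_input_used: bool) -> bool:
    """An AI-drafted passage enters the manuscript only if a human author
    has verified its factual claims, checked every citation against a real
    source (guarding against hallucinated references), and confirmed that
    no confidential data was sent to the model."""
    if confidential_input_used:
        return False  # the privacy/confidentiality boundary is absolute
    # accuracy and traceability boundaries: both human checks must pass
    return claims_verified and citations_checked

# A passage with verified claims and checked citations passes the gate:
print(accept_ai_passage(True, True, False))   # True
# Unchecked citations fail it, regardless of the other checks:
print(accept_ai_passage(True, False, False))  # False
```

The point of the sketch is that each boundary discussed in this group maps to an explicit, human-answered question, keeping accountability with the authors rather than the tool.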
Journal and Publisher Governance: Differences in Permitted Scope, Disclosure Rules, and Authorship/Responsibility Policies
Taking journal and publisher policies as its core material, this group compares and synthesizes differences along the chain of permitted scope, disclosure rules, authorship and responsibility, and the enforcement logic of reviewers and editors. The emphasis is on how blurred thresholds and inconsistent policies shape author behavior and publishing governance, and on policy-level pathways for implementing norms and controlling risk.
- A Comparative Analysis of Author Guidelines on the Use of Generative Artificial Intelligence for Manuscript Preparation in the Top 100 Medical Journals(Christos Evangelou, James Marchant, 2025, AMWA Journal)
- The Use of Artificial Intelligence in Neurosurgical Manuscript Writing: Journal Specific Policies and Their Implementation(Brian Carlson, Todd Laffaye, Landon Gray, Abhijith Bathini, Devi Prasad Patra, Anwesha Dubey, 2025, Journal of Clinical …)
- A Comparative Review of Imaging Journal Policies for Use of AI in Manuscript Generation.(Onur Simsek, Amirreza Manteghinejad, A. Vossough, 2024, Academic Radiology)
- Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit(Daivat Bhavsar, L. Duffy, Hamin Jo, C. Lokker, R. B. Haynes, Alfonso Iorio, Ana Marušić, Jeremy Y. Ng, 2024, Research Integrity and Peer Review)
- Nonhuman "Authors" and Implications for the Integrity of Scientific Publication and Medical Knowledge.(A. Flanagin, Kirsten Bibbins-Domingo, M. Berkwits, S. Christiansen, 2023, JAMA)
- Artificial intelligence and authorship editor policy: ChatGPT, Bard Bing AI, and beyond(J Crawford, M Cowling, S Ashton-Hay, 2023, Journal of University …)
- A new policy on the use of artificial intelligence tools for manuscripts submitted to CMAJ(M. Stanbrook, M. Weinhold, Diane Kelsall, 2023, Canadian Medical Association Journal)
- Position statement on artificial intelligence (AI) use in evidence synthesis across Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence 2025(Ella Flemyng, Anna Noel-Storr, Biljana Macura, Gerald Gartlehner, James A. Thomas, Joerg J Meerpohl, Zoe Jordan, Jan C. Minx, Angelika Eisele‐Metzger, Candyce Hamel, Paweł Jemioło, Kylie Porritt, Matthew Grainger, 2025, Cochrane Database of Systematic Reviews)
- Ethics statements in Rheumatology journals: present practices and future directions(F. Alnaimat, Salameh Al-Halaseh, Lujain Alzoubi, B. Khraisat, Osama Mohammad Hussein Abu Nassar, 2024, Rheumatology International)
- The blurred threshold of AI-use disclosure: International journal editor expectations of sufficiency and necessity(L. Lingard, E. Driessen, K. Oswald, 2025, medRxiv)
- AI-Written Scientific Manuscripts(Emilio Quaia, 2025, Tomography)
Standardized Transparency: Disclosure Checklists, Granularity, and Threshold Boundaries (e.g., GAMER)
This group addresses the concrete mechanisms of standardized transparency: how to disclose, at what granularity, and which key elements to include, using checklists and consensus frameworks to make disclosure more verifiable and consistent. It also responds to the problem of blurred disclosure thresholds by working toward an actionable reporting standard.
- Reporting guideline for the use of Generative Artificial intelligence tools in MEdical Research: the GAMER Statement(Xufei Luo, Y. Tham, M. Giuffré, R. Ranisch, M. Daher, K. Lam, A. V. Eriksen, Che-Wei Hsu, Akihiko Ozaki, Fabio Ynoe de Moraes, S. Khanna, Kuan-Pin Su, Emir Begagić, Zhaoxiang Bian, Yaolong Chen, J. Estill, 2025, BMJ Evidence-Based Medicine)
- The blurred threshold of AI-use disclosure: International journal editor expectations of sufficiency and necessity(L. Lingard, E. Driessen, K. Oswald, 2025, medRxiv)
- AI-Assisted Writing Disclosure Requirements in Academic Publishing: Privacy Rights Versus Institutional Overreach(Yue Liu, 2025, Preprints.org)
- Using AI to write scholarly publications(Mohammad Hosseini, L. Rasmussen, D. Resnik, 2023, Accountability in Research)
- The blurred threshold of AI-use disclosure: International journal editor expectations of sufficiency and necessity(L. Lingard, E. Driessen, K. Oswald, 2025, medRxiv)
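The checklist elements this group converges on (tool identity, delegated tasks, affected sections, human verification) lend themselves to a machine-readable record. The Python sketch below is an illustrative assumption about what such a schema could look like; the field names are hypothetical and are not taken from GAMER, GAIDeT, or any other cited framework:

```python
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    """Hypothetical machine-readable AI-use disclosure record."""
    tool_name: str               # e.g. "ChatGPT"
    tool_version: str            # the model/version actually used
    tasks: list[str]             # delegated tasks, e.g. ["language polishing"]
    sections_affected: list[str] # manuscript sections the tool touched
    human_verified: bool         # authors confirmed accuracy of all output

    def crosses_threshold(self, substantive_tasks: set[str]) -> bool:
        """A use requires disclosure if any delegated task appears on the
        journal's (journal-specific) list of substantive tasks."""
        return any(t in substantive_tasks for t in self.tasks)

# Language polishing may fall below a given journal's threshold,
# while drafting new text would cross it:
d = AIUseDisclosure("ChatGPT", "GPT-4", ["language polishing"],
                    ["Introduction"], human_verified=True)
print(d.crosses_threshold({"drafting text", "data analysis"}))  # False
```

Encoding the threshold as an explicit, journal-supplied task list makes the "blurred threshold" problem concrete: disagreement between journals becomes a difference in the contents of `substantive_tasks`, not in the record format.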
Compliance-Oriented Assistance and Educational/Assessment Redesign: Workflow Integration, Human-Controlled Training, and Capacity Building
This group emphasizes compliant assistive use within education and integrated workflows: how LLMs can support teaching feedback, writing training, and distance learning, while ethics modules and assessment redesign (shifting from punishment to capacity building) reduce opportunities for cheating and improve learners' information literacy. The core idea is to translate normative constraints into classroom-ready training, declaration, and evaluation mechanisms.
- Design and evaluation of an LLM literature review assistant(Z Xia, A Bakharia, 2025, ASCILITE 2025)
- Giving Credit Where Credit is Due: An Artificial Intelligence Contribution Statement for Research Methods Writing Assignments(Nicole Alea Albada, Vanessa E. Woods, 2024, Teaching of Psychology)
- Generative Artificial Intelligence in distance education: Transformations, challenges, and impact on academic integrity and student voice(K Sevnarayan, MA Potter, 2024, Journal of Applied Learning & …)
- Generative AI in Science Education: A Learning Revolution or a Threat to Academic Integrity? A Bibliometric Analysis(M. D. H. Wirzal, N. A. H. Md Nordin, N. S. Abd Halim, M. A. Bustam, 2024, Jurnal Penelitian dan Pengkajian Ilmu Pendidikan: e-Saintika)
- Harnessing the Power of Generative Artificial Intelligence to Promote Academic Integrity(Nadia Koren, Brent A. Anders, 2024, Advances in Educational Marketing, Administration, and Leadership)
- Rethinking Plagiarism in the Era of Generative AI(James Hutson, 2024, Journal of Intelligent Communication)
- Emerging Research and Policy Themes on Academic Integrity in the Age of Chat GPT and Generative AI(Sterling Plata, Maria Ana G. De Guzman, Arthea Quesada, 2023, Asian Journal of University Education)
- An ethics module on academic integrity and generative AI(C. Hill, J. Hargis, 2024, New Directions for Teaching and Learning)
- Student Perceptions of AI-Assisted Writing and Academic Integrity: Ethical Concerns, Academic Misconduct, and Use of Generative AI in Higher Education(Brady Lund, Nishith Reddy Mannuru, Zoë Abbie Teel, Tae Hee Lee, Nathanlie Ortega, Sara Simmons, Evelyn Ward, 2025, AI in Education)
- Academic Integrity In The Age Of Generative AI: Perceptions And Responses Of Vietnamese EFL Teachers(Ngo Cong‐Lem, T. Tran, T. Nguyen, 2024, Teaching English With Technology)
- Using AI to write scholarly publications(Mohammad Hosseini, L. Rasmussen, D. Resnik, 2023, Accountability in Research)
- AI-Assisted Tools for Scientific Review Writing: Opportunities and Cautions(Júlio Silva, Rafael P. Gouveia, Kallil M. C. Zielinski, M. C. F. Oliveira, D. R. Amancio, O. Bruno, Osvaldo N. Oliveira, 2025, ACS Applied Materials & Interfaces)
- Academic integrity in the generative AI (GenAI) era: a collective editorial response(A. Tlili, Melissa Bond, Aras Bozkurt, Khalid H. Arar, T. Chiu, Pericles 'Asher' Rospigliosi, 2025, Interactive Learning Environments)
Scenario-Based Legitimate Use: Writing and Review-Generation Pipelines and Discipline-Specific Risk Mitigation
This group focuses on application pipelines for legitimate, scenario-specific use: for example, process-oriented use in academic writing and review generation, and how specific disciplinary contexts (such as pharmacy education) couple risk mitigation with training and assessment, thereby limiting generative AI to tasks where it is better suited to assist rather than replace.
- Harnessing the Power of Generative Artificial Intelligence to Promote Academic Integrity(Nadia Koren, Brent A. Anders, 2024, Advances in Educational Marketing, Administration, and Leadership)
- Using AI to write scholarly publications(Mohammad Hosseini, L. Rasmussen, D. Resnik, 2023, Accountability in Research)
- AI-Assisted Tools for Scientific Review Writing: Opportunities and Cautions(Júlio Silva, Rafael P. Gouveia, Kallil M. C. Zielinski, M. C. F. Oliveira, D. R. Amancio, O. Bruno, Osvaldo N. Oliveira, 2025, ACS Applied Materials & Interfaces)
- Academic integrity in the generative AI (GenAI) era: a collective editorial response(A. Tlili, Melissa Bond, Aras Bozkurt, Khalid H. Arar, T. Chiu, Pericles 'Asher' Rospigliosi, 2025, Interactive Learning Environments)
- Generative artificial intelligence (Gen-AI) in pharmacy education: Utilization and implications for academic integrity: A scoping review(R. Mortlock, C. Lucas, 2024, Exploratory Research in Clinical and Social Pharmacy)
Normative Evolution and the State of Practice: Divergent Policy Adoption, Infrastructure Change, and Continuous Updating
This group illustrates how institutions and practice evolve: as the technology diffuses rapidly and the underlying infrastructure (detection tools, platform policies) changes, disclosure rules, authorship governance, and integrity policies require continuous updating. The group reflects a research orientation of mapping the current state, identifying gaps, and iterating norms.
- Writing for Machines, Formatting Originality: Plagiarism Detection and the Automation of Authorship(Ethan Stoneman, Joseph C. Packer, 2026, Theory, Culture & Society)
- Impact of generative AI in academic integrity and learning outcomes: A case study in the Upper East Region(JK Wiredu, N Seidu Abuba, 2024, Asian Journal of Research …)
- The rapid rise of generative AI and its implications for academic integrity: Students' perceptions and use of chatbots for assistance with assessments(Jan Henrik Gruenhagen, P. Sinclair, Julie-Anne Carroll, Philip R.A. Baker, Ann Wilson, Daniel Demant, 2024, Computers and Education: Artificial Intelligence)
- The impact of generative AI on academic integrity of authentic assessments within a higher education context(Alexander K. Kofinas, C. Tsay, D. Pike, 2025, British Journal of Educational Technology)
- The Presence and Nature of AI-Use Disclosure Statements in Medical Education Journals: A bibliometric study(M. Ans, Lauren A. Maggio, Hamza Algodi, Joseph A. Costello, Erik W. Driessen, K. Oswald, Lorelei Lingard, 2025, Perspectives on …)
- Variability of Guidelines and Disclosures for AI-Generated Content in Top Surgical Journals(Sina J. Torabi, Michael J. Warn, B. Bitner, Y. Haidar, T. Tjoa, E. C. Kuan, 2024, Surgical Innovation)
The merged grouping organizes research on the normative constraints and application boundaries of generative AI in academic writing along eight parallel strands. Starting from academic integrity and the mechanisms of ethical misconduct (including detection difficulties), accountable frameworks for authorship and responsibility, and journal/publisher policies with standardized disclosure checklists (thresholds and granularity), it then delimits the technical and institutional boundaries of permissible AI use (accuracy, privacy, traceability, human-in-the-loop). Norms are internalized through education and assessment redesign; scenario-based pipelines for legitimate use are proposed for specific tasks and disciplines; and, finally, institutional evolution and practice mapping explain why policy implementation remains uneven and requires continuous iteration. The overall conclusion: technical capability is only the foundation; genuine compliance and trustworthiness depend on human controllability, verifiable and transparent disclosure, and rigorous editorial and authorial responsibility governance.
A total of 77 related references.
… around ChatGPT and Academic integrity and concludes that although … academia, the way ChatGPT and other generative AI systems are used could surely undermine academic integrity…
This systematic literature review rigorously evaluates the impact of Generative AI (GenAI) on academic integrity within higher education settings. The primary objective is to synthesize how GenAI technologies influence student behavior and academic honesty, assessing the benefits and risks associated with their integration. We defined clear inclusion and exclusion criteria, focusing on studies explicitly discussing GenAI’s role in higher education from January 2021 to December 2024. Databases included ABI/INFORM, ACM Digital Library, IEEE Xplore, and JSTOR, with the last search conducted in May 2024. A total of 41 studies met our precise inclusion criteria. Our synthesis methods involved qualitative analysis to identify common themes and quantify trends where applicable. The results indicate that while GenAI can enhance educational engagement and efficiency, it also poses significant risks of academic dishonesty. We critically assessed the risk of bias in included studies and noted a limitation in the diversity of databases, which might have restricted the breadth of perspectives. Key implications suggest enhancing digital literacy and developing robust detection tools to effectively manage GenAI’s dual impacts. No external funding was received for this review. Future research should expand database sources and include more diverse study designs to overcome current limitations and refine policy recommendations.
… violations of students' academic integrity. However, research is … 37 articles on academic integrity in the Age of Gen AI and … of artificial intelligence tools on students’ intellectual integrity …
Introduction Generative artificial intelligence (Gen-AI), exemplified by the widely adopted ChatGPT, has garnered significant attention in recent years. Its application spans various health education domains, including pharmacy, where its potential benefits and drawbacks have become increasingly apparent. Despite the growing adoption of Gen-AI such as ChatGPT in pharmacy education, there remains a critical need to assess and mitigate associated risks. This review explores the literature and potential strategies for mitigating risks associated with the integration of Gen-AI in pharmacy education. Aim To conduct a scoping review to identify implications of Gen-AI in pharmacy education, identify its use and emerging evidence, with a particular focus on strategies which mitigate potential risks to academic integrity. Methods A scoping review strategy was employed in accordance with the PRISMA-ScR guidelines. Databases searched included PubMed, ERIC [Education Resources Information Center], Scopus and ProQuest from August 2023 to 20 February 2024, and included all relevant records from 1 January 2000 to 20 February 2024 relating specifically to LLM use within pharmacy education. A grey literature search was also conducted due to the emerging nature of this topic. Policies, procedures, and documents from institutions such as universities and colleges, including standards, guidelines, and policy documents, were hand searched and reviewed in their most updated form. These documents were not published in the scientific literature or indexed in academic search engines. Results Articles (n = 12) were derived from the scientific databases and records (n = 9) from the grey literature. Potential uses and benefits of Gen-AI within pharmacy education were identified in all included published articles; however, there was a paucity of published articles addressing the potential risks to academic integrity.
Grey literature records held the largest proportion of risk mitigation strategies, largely focusing on increased academic and student education and training relating to the ethical use of Gen-AI, as well as considerations for redesigning current assessments whose formats put academic integrity at risk from Gen-AI use. Conclusion Drawing upon existing literature, this review highlights the importance of evidence-based approaches to address the challenges posed by Gen-AI such as ChatGPT in pharmacy education settings. Additionally, whilst mitigation strategies are suggested, primarily drawn from the grey literature, there is a paucity of traditionally published scientific literature outlining strategies for the practical and ethical implementation of Gen-AI within pharmacy education. Further research related to the responsible and ethical use of Gen-AI in pharmacy curricula, and studies related to strategies adopted to mitigate risks to academic integrity, would be beneficial.
Generative AI (hereinafter GenAI) technology, such as ChatGPT, is already influencing the higher education sector. In this work, we focused on the impact of GenAI on the academic integrity of assessments within higher education institutions, as GenAI can be used to circumvent assessment approaches within the sector, compromising their quality. The purpose of our research was threefold: first, to determine the extent to which the use of GenAI can be detected via the marking and moderation process; second, to understand whether the presence of GenAI affects the marking process; and finally, to establish whether authentic assessments can safeguard academic integrity. We used a series of experiments in the context of two UK‐based universities to examine these issues. Our findings indicate that markers, in general, are not able to distinguish assessments that have had GenAI input from assessments that did not, even though the presence of GenAI affects the way markers approach the marking process. Our findings also suggest that the level of authenticity in an assessment has no impact on the ability to safeguard against or detect GenAI usage in assessment creation. In conclusion, we suggest that current approaches to assessments in higher education are susceptible to GenAI manipulation and that the higher education sector cannot rely on authentic assessments alone to control the impact of GenAI on academic integrity. Thus, we recommend giving more critical attention to assessment design and placing more emphasis on assessments that rely on social experiential learning and are performative rather than output‐based and asynchronously written. What is already known about this topic GenAI has enabled students to complete higher education assessments quickly and with good quality, leading to challenges in academic integrity. GenAI has transformed the requirements and considerations in assessment design in higher education. 
Authentic assessments are seen as a prominent way to tackle the GenAI challenge. What this paper adds: We provide quantitative and qualitative experimental evidence suggesting that GenAI can generate authentic assessments that pass the scrutiny of experienced academics. We demonstrate how the use of authentic assessments alone does not protect the academic integrity of students in higher education. Our qualitative analysis indicates that markers may generate false positive and false negative results if they suspect GenAI tampering in an assessment. Thus, students' learning is not assessed correctly. Implications for practice and/or policy: When universities and national organisations design policies regarding GenAI, authentic assessments are not the panacea; the focus must remain on assessment design. Assessments of learning need to shift from assessing output to focusing on process and relevance to the workplace. That would mean a paradigmatic shift from written assessments to synchronous interpersonal assessments. The move away from written assessments has implications that are far reaching for the academy if written assessments cannot be trusted as a reliable indicator for and of learning.
This article explores the intersection between academic integrity and generative AI (GenAI). It presents a tested framework for a versatile 3‐h module applicable to various disciplines. Since ChatGPT's emergence, GenAI's impact on academic integrity has raised concerns, challenged established norms, and blurred lines of authorship. Engaging students in this topic encourages critical reflection and ethical use of these technologies. This approach draws on experiential learning and student–faculty partnership approaches to activities and assessments, providing students with a platform to not only navigate the responsible application of GenAI in assignments but also foster a dialogue between students and faculty on crafting effective policies for GenAI use.
… Generative artificial intelligence (GenAI) has reshaped distance … AI usage. This study explores the transformative impact of GenAI in distance learning and focuses on academic integrity …
… The rapid adoption of generative AI tools such as … by academics about potential threats to academic integrity. This paper contributes to the pressing discussion about responses to AI …
The integration of generative artificial intelligence (AI) technologies, such as GPT‐3, Wordtune, and Jenni, into academic settings has revolutionized content creation, raising significant questions about authorship and originality. While AI offers benefits in efficiency and productivity, it presents substantial challenges to academic integrity. This paper examines these challenges and the need for new frameworks and policies to ensure ethical AI use.
… its use also raises concerns about academic integrity, data privacy, … values while embracing the transformative potential of AI. … Moreover, AI-powered tools facilitate accessibility through …
… , this chapter considers the impact of AI and academic integrity in higher education in relation to its … of academic integrity and artificial intelligence (AI), focusing on generative AI writing …
Academic integrity is essential in educational institutes to ensure equal and fair learning. However, the increasing use of generative artificial intelligence (AI) in education has generated concerns about its integrity. The use of AI to complete assignments, projects, and evaluations has raised concerns about unfair advantages and the legitimacy of the work being submitted. This study highlights the need for rules and systems to protect academic integrity as AI expands in education. This study attempts to evaluate AI-generated solutions at high school, college, and graduate levels based on three important metrics: Fulfillment, Presentation, and Helpfulness. This study implements an experiment using the proposed framework, and the findings show that the generative AI models used exceed human-like performance. Through this study, it was observed that there is a risk of bias against those submitting genuine effort.
Academic Integrity In The Age Of Generative AI: Perceptions And Responses Of Vietnamese EFL Teachers
This study examines the perceptions and responses of Vietnamese teachers of English as a Foreign Language (EFL) to academic integrity concerns that arise from the use of AI, specifically chatbots like ChatGPT, in foreign language education. The study employed an open-ended survey to collect data from 31 Vietnamese EFL teachers who were asked to share their views on AI-based academic dishonesty, identify perceived causes, outline consequences for students engaging in AI-based plagiarism, and articulate their pedagogical responses to the issue. The study found that teachers primarily attributed students’ AI-driven plagiarism to a deficiency in original ideas, poor learning attitudes and motivation, and students’ linguistic competencies. The over-reliance on AI was identified as a hindrance to the development of knowledge and skills such as critical thinking and language proficiency. In response to academic dishonesty, teachers advocated for increased regulations, the implementation of AI-based plagiarism detectors, and education on responsible AI use. The findings underscore the importance of adapting language teaching pedagogies and assessments to incorporate personalised learning and process-oriented teaching approaches that support critical thinking and genuine learning motivation. The insights derived from this research contribute to a deeper understanding of EFL educators’ perspectives, offering valuable input for the development of policies and practices aimed at promoting academic integrity in the AI era.
The integration of generative artificial intelligence (AI) in Science, Technology, Engineering, and Mathematics (STEM) education presents transformative opportunities alongside significant challenges. This study investigates the dual impact of generative AI on STEM learning outcomes and academic integrity through a comprehensive bibliometric analysis employing co-citation, keyword analysis, and trend mapping. The results reveal that AI tools such as ChatGPT have revolutionized personalized learning by offering tailored feedback, enhancing critical thinking, and improving student engagement. However, these advancements are tempered by concerns over academic misconduct, particularly plagiarism, and the erosion of essential cognitive skills due to overreliance on AI-generated content. Ethical considerations remain critical, necessitating the development of robust policies and ethical frameworks to safeguard academic integrity. Beyond educational settings, the findings suggest broader applicability to professional training and skills development, as the benefits and challenges of AI extend beyond coursework. This research provides valuable insights for educators, policymakers, and researchers, advocating for a balanced approach to AI integration that maximizes its potential while preserving educational standards.
… The release of ChatGPT has sparked significant academic integrity concerns in higher education. However, some commentators have pointed out that generative artificial intelligence (AI…
Abstract This study aims to understand the factors influencing academic integrity in the age of generative artificial intelligence (GenAI) through the lens of the Theory of Planned Behavior (TPB). A seven-factor measurement model was developed, hypothesizing the relationships between TPB constructs (e.g., perceived behavioral control (PBC), past behavior (PB), moral obligations (MO), social norms (SN), and information literacy (IL)) and GenAI adoption. It was tested using structural equation modeling (SEM). The findings showed that both PBC (β = 0.223, CR = 4.234, p < 0.05) and SN (β = 0.508, CR = 5.644, p < 0.05) significantly increased behavioral intention. MO (β = -0.273, CR = -4.234, p < 0.05) and IL (β = -0.253, CR = -3.386, p < 0.05) helped uphold academic integrity. A strong association was found between behavioral intention (BI) and GenAI adoption (AD) for academic dishonesty (β = 0.651, CR = 11.780, p = 0.000). The findings suggest that protecting academic integrity needs a multi-tiered approach, including AI literacy programs, research and critical thinking training, strong plagiarism detection tools, comprehensive and adaptable guidelines for the ethical use of GenAI, citation management training, and a culture valuing academic honesty and originality.
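Read schematically, the reported standardized paths amount to two structural equations (a sketch assuming the four reported predictors load directly on BI; PB's coefficient is not reported in the abstract, and error terms are shown only as placeholders):

```latex
\begin{aligned}
\mathrm{BI} &= 0.223\,\mathrm{PBC} + 0.508\,\mathrm{SN} - 0.273\,\mathrm{MO} - 0.253\,\mathrm{IL} + \varepsilon_1,\\
\mathrm{AD} &= 0.651\,\mathrm{BI} + \varepsilon_2.
\end{aligned}
```

The negative signs on MO and IL correspond to the abstract's claim that moral obligations and information literacy help uphold integrity by reducing the intention to adopt GenAI for dishonesty.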
The rise of generative AI in higher education has disrupted our traditional understandings of academic integrity, moving our focus from clear-cut infractions to evolving ethical judgment. In this study, 401 students from major U.S. universities provide insight into how beliefs, behaviors, and policy awareness intersect in shaping how students interact with AI-assisted writing. The findings indicate that students’ ethical beliefs – not institutional policies – are the strongest predictors of perceived misconduct and actual AI use in writing. Policy awareness was found to have no significant effect on ethical judgments or behavior. Instead, students who believe AI writing is cheating were found to be substantially less likely to view it as ethical or engage with it. These findings suggest that many students do not treat AI use in learning activities as an extension of conventional cheating (e.g., plagiarism), but rather as a distinct category of academic conduct/misconduct. Rather than using punitive models to attempt to punish students for using AI, this study suggests that education about AI ethics and the risk of AI overreliance may prove more successful for curbing unethical AI use in higher education.
This chapter explores how generative artificial intelligence, which poses a threat to academic integrity, can also be used to address some of the root causes of breaches in academic integrity. The root causes discussed are primarily derived from the largest self-reported academic integrity survey conducted across Australia's leading universities. Some of these root causes that the chapter will address include dissatisfaction with the teaching and learning environment, challenges related to English as an additional language, and the perception of easy access to cheating opportunities. By embracing generative artificial intelligence and incorporating it into the classroom, educators are afforded the opportunity to discuss its acceptable use in assessments and to bring attention to the academic integrity policies and procedures of the institution.
… This section presented the findings of our investigation into the impact of Generative Artificial Intelligence (AI) on academic integrity and learning outcomes in three selected universities …
ABSTRACT Currently there is a broad consensus among scholars that artificial intelligence (AI) tools can be used in research and publication, and that their use should be disclosed. Publishers and influential organizations, like the International Committee of Medical Journal Editors, have developed different and sometimes contradictory disclosure policies. We review some of these policies, examine the ethical reasons for disclosing AI use in research, and develop a framework for disclosure. We distinguish between mandatory, optional, and unnecessary disclosure of AI use, arguing that disclosure should be mandatory only when AI use is intentional and substantial. AI use is intentional when it is directly employed with a specific goal or purpose in mind. AI use is substantial when it 1) produces evidence, analysis, or discussion that supports or elaborates on the conclusions/findings of a study; or 2) directly affects the content of the research/publication. To support the application of our framework, we state three criteria for identifying substantial AI uses in research: a) using AI to make decisions that directly affect research results; b) using AI to generate content, data or images; and c) using AI to analyze content, data or images. Disclosure should be mandatory when AI use meets one of these criteria.
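The mandatory/optional distinction in this framework reduces to a simple decision rule, sketched below as a minimal illustration (function and parameter names are hypothetical, not from the paper):

```python
def disclosure_requirement(intentional, affects_results, generates_content, analyzes_content):
    """Sketch of the framework: disclosure is mandatory only when AI use is
    both intentional and substantial. 'Substantial' means at least one of the
    three criteria holds: the AI made decisions directly affecting results,
    generated content/data/images, or analyzed content/data/images."""
    substantial = affects_results or generates_content or analyzes_content
    if intentional and substantial:
        return "mandatory"
    return "optional or unnecessary"

# e.g., intentionally using AI to generate figures meets criterion (b)
verdict = disclosure_requirement(
    intentional=True, affects_results=False,
    generates_content=True, analyzes_content=False,
)
```

Under this rule, incidental AI use (e.g., a spell-checker running in the background) fails the intentionality test and so never triggers mandatory disclosure.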
… AI disclosure policies assessed disclosure rates of AI usage in the radiology research literature as well as the effect of manuscript characteristics on disclosure … generative AI adoption …
Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors’ responsible use of AI chatbots. This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023—December 2023). Data was categorized into policy elements, such as ‘proofreading’ and ‘image generation’. Counts and percentages of ‘yes’ (i.e., permitted), ‘no’, and ‘no available information’ (NAI) were established for each policy element. A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors’ use of AI chatbots. No policy allowed authorship for AI chatbots (or other AI tool). Most (49/56 or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors. Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12–18 months may uncover evolving approaches toward AI chatbot use with more academic publishers having a policy.
PURPOSE: To assess the prevalence of Artificial Intelligence (AI) usage policies in manuscript writing in PubMed-indexed ophthalmology journals and examine the relationship between adoption of these policies and journal characteristics. DESIGN: Cross-sectional study. SUBJECTS: PubMed-indexed ophthalmology journals. MAIN OUTCOME MEASURES: Prevalence of policies in journal guidelines regarding the use of AI in manuscript writing. METHODS: We reviewed the guidelines of 84 ophthalmology journals indexed in PubMed to determine the presence of AI-use policies for manuscript generation. We further compared journal metrics, such as CiteScore, Journal Impact Factor (JIF), Journal Citation Indicator (JCI), Source Normalized Impact per Paper (SNIP), and SCImago Journal Rank (SJR), between journals with and without AI policies. Additionally, we analyzed the association between AI policy adoption and journal characteristics, such as MEDLINE indexing and society affiliation. RESULTS: Among the 84 journals, 53 (63.1%) had AI policies for manuscript generation, with no significant changes observed during the study period. Journals indexed in MEDLINE were significantly more likely to have AI policies (68.8%) than non-MEDLINE-indexed journals, where no AI policies were found (0%) (p = 0.0008). There was no significant difference in AI policy adoption between society-affiliated (62.7%) and unaffiliated journals (64.7%) (p = 0.8443). Journals with AI policies had significantly higher metrics, including CiteScore, SNIP, SJR, JIF, and JCI (p < 0.05). CONCLUSIONS: While many ophthalmology journals have adopted AI policies, the lack of guidelines in over one-third of journals highlights a critical need for consistent and comprehensive AI policies, particularly as the AI landscape rapidly advances.
… policies and disclose their use of AI in the manuscript writing process. Finally, we determine whether disclosing AI … Finally, we demonstrate that disclosure of AI use for manuscript writing …
Abstract Generative Artificial Intelligence (GenAI) tools are increasingly integrated into research and academic writing, offering opportunities to streamline workflows and increase productivity. However, these tools also introduce risks when used uncritically, unethically, or without transparency. In particular, the undisclosed use of GenAI, now widely documented, may compromise research integrity. The aim of this AMEE Guide is to provide researchers with practical guidance on when and how to disclose the use of GenAI in scholarly writing. Specifically, we propose a clear framework to promote ethical GenAI use and reporting practices in health professions education research. We start with an exploration of key aspects of responsible use of GenAI in publishing (e.g. authorship, verification and responsibility, plagiarism and bias, data privacy and confidentiality, journal requirements). We then address the importance of transparency about GenAI use in research production, both within research teams (internal disclosure) and to journals and readers (external disclosure). With respect to the latter, we highlight the need to be aware of journal-specific guidance and offer guiding principles for effective disclosure. Central to these principles is the call for scholars to provide a candid description of how GenAI was used, allowing readers to understand how the model shaped the research and writing processes. We also briefly consider the use and disclosure of GenAI in peer review. Given that, at the time of writing this Guide (November 2025), many questions remain regarding AI use and disclosure for publishing, we conclude with reflections on future developments and directions for research.
RATIONALE AND OBJECTIVES: Artificial intelligence (AI) technologies are rapidly evolving and offering new advances almost on a day-by-day basis, including various tools for manuscript generation and modification. On the other hand, these potentially time- and effort-saving solutions come with potential bias, factual error, and plagiarism risks. Some journals have started to update their author guidelines in reference to AI-generated or AI-assisted manuscripts. The purpose of this paper is to evaluate author guidelines for including AI use policies in radiology journals and compare scientometric data between journals with and without explicit AI use policies. MATERIALS AND METHODS: This cross-sectional study included 112 MEDLINE-indexed imaging journals and evaluated their author guidelines between 13 October 2023 and 16 October 2023. Journals were identified based on subject matter and association with a radiological society. The author guidelines and editorial policies were evaluated for the use of AI in manuscript preparation and specific AI-generated image policies. We assessed the existence of an AI usage policy among subspecialty imaging journals. The scientometric scores of journals with and without AI use policies were compared using the Wilcoxon signed-rank test. RESULTS: Among 112 MEDLINE-indexed radiology journals, 80 were affiliated with an imaging society and 32 were not. 69 (61.6%) of the 112 imaging journals had an AI usage policy, and 40 (57.9%) of those 69 mentioned a specific policy on AI-generated figures. CiteScore (4.9 vs 4, p = 0.023), SCImago Journal Rank (0.75 vs 0.54, p = 0.010), and Journal Citation Indicator (0.77 vs 0.62, p = 0.038) were significantly higher in journals with an AI policy; Source Normalized Impact per Paper was also higher (1.12 vs 0.83) but did not reach significance (p = 0.06).
CONCLUSION: The majority of imaging journals provide guidelines for AI-generated content, but a substantial number still lack AI usage policies or do not require disclosure of AI-generated manuscript content. Journals with an established AI policy had higher citation and impact scores.
… journal-level explicit policies and 22 (57.9%) … AI in manuscript preparation, with most prohibiting its inclusion as an author. Most journals allow but mandate transparent disclosure of AI …
ABSTRACT Background: Disclosure of AI use is seen as a sign of the author’s honesty and commitment to the principle of transparency. However, existing discussions have paid little attention to a special case: authors who honestly disclose their use of AI feel ashamed because of their honesty. Methods and Results: The main issue discussed in this paper is why authors experience shame in the process of responsible disclosure of AI use. We redefine this emotion and its causes from the perspective of moral emotions. We argue that current disclosure policies only emphasize honesty but do not address how this honesty should be fairly treated. Conclusions: Current disclosure guidelines should ensure that authors feel more secure when disclosing AI use honestly in academic papers, thereby promoting an effective and responsible culture of disclosure. This requires more constructive narrative support. Expressing appreciation and respect for the honesty represented by disclosure is an appropriate way to address the issues discussed in this paper.
… authors to disclose any use of AI, including LLMs … policies should cultivate an ethos of disclosure and responsible use. For AI & Society, we propose five policy measures: (1) disclosure …
… , transparent disclosure of AI use, and full human accountability with explicit prohibition of … , we conducted a systematic review of AI-related editorial policies across 15 leading publishing …
We conducted a cross-sectional analysis of author guidelines from the top 100 medical journals by SCImago Journal Rank to evaluate the coverage and content of policies related to generative artificial intelligence (GAI). Among the journals analyzed (median impact factor, 24.8), 76% permitted GAI for language editing, whereas fewer allowed it for drafting text (26%), figure or table creation (22%), or data analysis (12%). Most journals (78%) explicitly prohibited the use of GAI to generate entire manuscripts. Disclosure of GAI use was required by 78% of journals, although only 16% provided specific disclosure formats. Most journals (80%) assigned responsibility for final content to human authors and prohibited listing GAI as an author. Only 33% of journals referenced external ethical frameworks, with the International Committee of Medical Journal Editors (ICMJE; 16%) and Committee on Publication Ethics (COPE; 12%) being the most commonly referenced. Publisher identity strongly predicted policy adoption across all dimensions (Cramér’s V > 0.8 for multiple policy areas). Moreover, geographic region was moderately associated with GAI policies. However, journal impact metrics showed limited correlation with GAI policy stringency. Permitting a broader use of GAI, especially for language editing and manuscript generation, was strongly correlated with mandatory disclosure requirements. Although most medical journals have established GAI policies, significant gaps remain in comprehensiveness and specificity. The strong publisher-driven pattern suggests opportunities for developing harmonized, specialty-specific standards.
… of such a disclosure in the methods section could be: “In writing this manuscript, MH used … We encourage the editors of other journals to consider adopting policies on the use of AI in …
This editorial provides insights on AI-written scientific manuscripts which represent an increasingly frequent phenomenon that must be managed by authors, reviewers and journal editors [...].
The growing accessibility and sophistication of artificial intelligence (AI) tools have transformed many areas of research, including scientific writing. AI tools, such as natural language processing models and machine learning-based writing assistants, are increasingly used to help draft, edit, and refine scientific manuscripts. However, the use of AI in the writing process introduces both legal and ethical challenges. Various guidelines and policies have emerged, particularly from academic publishers, aimed at ensuring transparency and maintaining the integrity of scientific work. This editorial aimed to provide guidance for researchers on the ethical and practical considerations regarding the use of AI in writing scientific manuscripts, focusing on institutional policies, authorship accountability, intellectual property concerns, plagiarism issues, and image integrity.
… At article submission, CMAJ requires authors to disclose any use of artificial intelligence (AI)–assisted technologies (eg, large language models, chatbots, image creators) in any aspect …
… To address concerns about the use of AI and language models in the writing of manuscripts, JAMA and the JAMA Network journals have updated relevant policies in the journals’ …
Abstract As generative artificial intelligence (AI) becomes increasingly embedded in scholarly writing, the academy faces a pivotal ethical challenge: how to distinguish support from substitution and collaboration from concealment. This essay argues that authorship must remain a marker of intellectual labor and epistemic accountability, even as AI tools reshape the mechanics of composition. Drawing on recent scholarship and pedagogical frameworks, it calls for principled clarity in institutional policy, editorial standards, and instructional practice. The metaphor of the “ghost in the manuscript” anchors a broader critique of silent AI participation, urging scholars to name, disclose, and interrogate algorithmic influence. Without such traceability, the scholarly record risks mistaking fluency for insight and polish for rigor. If we fail to act, we may preserve the manuscript’s surface while allowing the author to vanish.
… students in writing short literature reviews. The system allows students to upload academic papers, write drafts, and receive rubric-aligned feedback from a Large Language Model (LLM)…
Journalists rely on their agency—the ability to exercise independent judgment in alignment with their values—to fulfill their democratic social role. In this study, we investigate how LLM-infused writing tools reshape journalists’ agency in editorial decision making. In interviews with 20 science journalists, we presented four hypothetical LLM-infused writing tools representing a range of possible design space configurations. We find that journalists are selectively willing to cede control: they view AI that gathers information or offers feedback as supporting their efficiency by automating execution while leaving decision making intact. In contrast, they see AI that generates core ideas or drafts as a threat to their autonomy, skill development, self-fulfillment, and professional relationships. This sensitivity extends to seemingly automatable tasks such as manipulating writing voice with AI, which are seen as reducing opportunities for reflection and critical thinking. We discuss the implications of these findings for design that preserves journalistic agency in the moment, and over the long term.
The evolution of large language models (LLMs) is reshaping the landscape of scientific writing, enabling the generation of machine-written review papers with minimal human intervention. This paper presents a pipeline for the automated production of scientific survey articles using Retrieval-Augmented Generation (RAG) and modular LLM agents. The pipeline processes user-selected literature or citation network-derived corpora through vectorized content, reference, and figure databases to generate structured, citation-rich reviews. Two distinct strategies are evaluated: one based on manually curated literature and the other on papers selected through citation network analysis. Results demonstrate that increasing the input materials’ diversity and quantity improves the generated output’s depth and coherence. Although current iterations produce promising drafts, they fail to meet top-tier publication standards, particularly in critical analysis and originality. Results were obtained for a case study on a particular topic, namely, Langmuir and Langmuir–Blodgett films, but the proposed pipeline applies to any user-selected topic. The paper concludes with suggestions of how the system could be enhanced through specialized modules and discusses broader implications for scientific publishing, including ethical considerations, authorship attribution, and the risk of review proliferation. This work represents an opportunity to discuss the advantages and pitfalls introduced by the possibility of using AI assistants to support scientific knowledge synthesis.
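The retrieval step of a pipeline like the one described above can be illustrated in miniature. The sketch below uses bag-of-words cosine similarity over a toy corpus as a stand-in for the vectorized content database, leaving drafting to downstream LLM agents; all names and records here are hypothetical, not from the paper's implementation:

```python
import math
from collections import Counter

def vectorize(text):
    # Term-frequency bag of words as a stand-in for dense embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    # Rank records by similarity to the query; in a full RAG pipeline the
    # top-k records would be passed to a generation agent as grounding context.
    qv = vectorize(query)
    return sorted(corpus, key=lambda d: cosine(qv, vectorize(d["abstract"])), reverse=True)[:k]

# Hypothetical literature records standing in for the content database.
corpus = [
    {"title": "Langmuir-Blodgett film review", "abstract": "langmuir blodgett thin films deposition monolayer"},
    {"title": "Graphene synthesis", "abstract": "graphene chemical vapor deposition growth"},
]
top = retrieve("langmuir blodgett films", corpus, k=1)
```

A production pipeline would replace the bag-of-words scoring with embedding vectors and add the reference and figure databases the paper describes, but the retrieve-then-generate structure is the same.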
… are leveraging LLM to enhance their writing, journals are leveraging LLM to detect LLM-… A good prompt can guide the LLM to act as a journal submission checker, performing tasks …
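A checker of the kind the snippet above describes can be sketched as a prompt builder; the checklist items and wording below are illustrative assumptions, not any journal's actual requirements:

```python
def build_checker_prompt(excerpt, checklist):
    # Frame the LLM as a submission checker and ask for an item-by-item audit.
    items = "\n".join(f"- {item}" for item in checklist)
    return (
        "You are a journal submission checker. Review the manuscript excerpt "
        "below and report, item by item, whether each requirement is met.\n\n"
        f"Requirements:\n{items}\n\nExcerpt:\n{excerpt}"
    )

# Illustrative policy items on AI use, modeled on common publisher requirements.
checklist = [
    "AI-use disclosure statement present",
    "No AI tool listed as an author",
    "Human authors accept responsibility for all content",
]
prompt = build_checker_prompt("In writing this manuscript, the authors used ...", checklist)
```

The resulting string would be sent to an LLM API; keeping the checklist as data makes it easy to swap in a specific journal's policy items.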
Researchers have been using generative artificial intelligence (GenAI) to support writing manuscripts for several years now. However, as GenAI evolves and scientists are using it more frequently, the case for mandatory disclosure of GenAI for writing assistance continues to diverge from the initial justifications for disclosure, namely (1) preventing researchers from taking credit for work done by machines; (2) enabling other researchers to critically evaluate a manuscript and its specific claims; and (3) helping editors determine if a submission satisfies their editorial policies. Our initial position (communicated through previous publications) regarding GenAI use for writing assistance was in favor of mandatory disclosure. Nevertheless, as we show in this paper, we have changed our position and now support instituting a voluntary disclosure policy because currently (1) the credit due to machines for assisting researchers is moving below the threshold of requiring recognition; (2) it is impractical (if not impossible) to accurately specify what parts of the text are human-/GenAI-generated; and (3) disclosures could increase biases against non-native speakers of the English language and compromise the integrity of the peer review system. Consequently, we argue, it should be up to the authors of manuscripts to disclose their use of GenAI for writing assistance. For example, in disciplines where writing is the hallmark of originality, or when authors believe disclosure is beneficial, a voluntary checkbox in manuscript submission systems, visible only after publication (rather than a free-text note in the manuscripts) would be preferable.
… for transparency regarding AI's role in legal authorship, we … risks underlying hidden AI authorship and empirically examine … empirical research on AI authorship disclosure. Our study …
… AI might lower the threshold for misconduct. These proposals may reaffirm existing standards to prevent misconduct … Regardless of this, we must reconfirm these fundamentals in the AI/…
The use of Generative AI (GenAI) in scientific writing has grown rapidly, offering tools for manuscript drafting, literature summarization, and data analysis. However, these benefits are accompanied by risks, including undisclosed AI authorship, manipulated content, and the emergence of papermills. This perspective examines two key strategies for maintaining research integrity in the GenAI era: (1) detecting unethical or inappropriate use of GenAI in scientific manuscripts and (2) using AI tools to identify mistakes in scientific literature, such as statistical errors, image manipulation, and incorrect citations. We reviewed the capabilities and limitations of existing AI detectors designed to differentiate human-written text (HWT) from machine-generated text (MGT), highlighting performance gaps, genre sensitivity, and vulnerability to adversarial attacks. We also investigate emerging AI-powered systems aimed at identifying errors in published research, including tools for statistical verification, citation validation, and image manipulation detection. Additionally, we discuss recent publishing industry initiatives to combat AI-driven papermills. Our investigation shows that these developments are not yet sufficiently accurate or reliable for use in academic assessment, but they mark early and promising steps toward scalable, AI-assisted quality control in scholarly publishing.
Recent research on generative artificial intelligence has primarily focused on two separate issues: (1) the attribution of copyright authorship and ownership, and (2) the allocation of liability for harms resulting from artificial intelligence (“AI”) outputs. However, there is a significant but often-overlooked dimension: the interplay between authorship attribution and liability allocation in AI-assisted scientific research. Therefore, this Article examines the similarities and differences between intellectual property and tort law, highlighting how generative AI challenges long-standing assumptions in both fields and encouraging a reevaluation of scientific standards, liability regimes, and governance of AI. Drawing on comparative legal analysis, ethical guidelines, and a case study of MIT’s AI-driven antibiotic discovery, this Article develops a unified analytical framework for intellectual property and tort law that positions “control” as the cornerstone of both authorship and liability. This framework reveals how different actors - researchers, institutions, AI developers, and AI companies - exercise varying degrees of control over AI-assisted scientific research. This Article does not suggest that AI itself should be recognized as an author, but it contemplates the circumstances in which it may be appropriate for AI companies and developers to be acknowledged as coauthors and, accordingly, bear liability for misconduct. This Article argues for developing a unified analytical framework that bridges the gap between copyright and tort law. Such a framework would provide policymakers, scientific institutions, and academic journals with a comprehensive toolkit for rethinking current authorship criteria, liability regimes, and ethical guidelines.
The application of new technologies, such as artificial intelligence (AI), to science affects the way and methodology in which research is conducted. While the responsible use of AI brings many innovations and benefits to science and humanity, its unethical use poses a serious threat to scientific integrity and literature. Even in the absence of malicious use, chatbot output itself, generated by AI-based software applications, carries the risk of containing biases, distortions, irrelevancies, misrepresentations and plagiarism. Therefore, the use of complex AI algorithms raises concerns about bias, transparency and accountability, requiring the development of new ethical rules to protect scientific integrity. Unfortunately, the development and writing of ethical codes cannot keep up with the pace of development and implementation of technology. The main purpose of this narrative review is to inform readers, authors, reviewers and editors about new approaches to publication ethics in the era of AI. It specifically focuses on tips on how to disclose the use of AI in a manuscript, how to avoid publishing entirely AI-generated text, and current standards for retraction.
The requirement for authors to disclose AI assistance in academic writing represents an expansion of academic misconduct definitions beyond their legitimate scope. This study examines current journal policies mandating AI disclosure, argues for author privacy rights in tool usage, and demonstrates how such requirements disproportionately impact authors with innovative theoretical contributions. Through analysis of recent policy developments and documented cases of systematic rejection patterns, we show that AI disclosure requirements serve as mechanisms for institutional gatekeeping rather than genuine integrity protection. The paper proposes that authors possess inherent privacy rights regarding their writing processes, analogous to historical acceptance of computational aids, and that editorial focus should remain on content quality rather than production methods.
Generative artificial intelligence (GenAI) has accelerated the production of scholarly text, images, and analytic outputs, while simultaneously destabilising long-standing cues used to infer human authorship and scholarly accountability. As a result, manuscripts increasingly arrive with unclear boundaries between human contribution, tool-assisted editing, and tool-generated content, and these distinctions are rarely made explicit. This creates a veracity problem for readers and reviewers, uneven risk for authors, and governance challenges for journals seeking consistent peer review and editorial decision-making. This note articulates an updated and enforceable authorship position for the Journal of University Teaching and Learning Practice (JUTLP), responding to five evolutions since our 2023 stance: GenAI’s entangled and multimodal integration into scholarly workflows, partial convergence in publishing standards, heightened confidentiality and data governance risks, the post-plagiarism imperative to prioritise transparency over detection, and the increasing complexity of defining what constitutes ‘AI use’. We set out six commitments covering: specific disclosure requirements, prohibition of GenAI generating the manuscript’s substantive scholarly contribution, human centrality and confidentiality in peer review, conditions for transparent use of synthetic media, mandatory reflexivity when GenAI is used in methods or analysis, and the non-transferability of accountability away from named authors. This position aims to preserve trust in the scholarly record by making responsibility legible again.
… for authorship integrity, AI transparency, and … by AI without disclosure are classified as misconduct under DFAS-EEP-RR (Alaali, 2025c) and trigger inclusion in the Reviewer Misconduct …
The rapid integration of generative artificial intelligence into scientific publishing is reshaping how academic text can be produced, revised, and scaled. While transparent and limited use of AI for language support may be acceptable, a new structural vulnerability may be emerging in medical publishing: the large-scale production of short, plausible, and weakly individualized correspondence across multiple specialties. In this viewpoint, we describe and conceptualize a pattern that may be termed synthetic authorship, defined not as undisclosed AI use alone, but as a reproducible mode of scholarly output structurally facilitated by automation. We focus particularly on letters to the editor, a format that combines brevity, rapid editorial handling, and formal indexation, and may therefore be especially exposed to this phenomenon. Based on recurring patterns observed in PubMed-indexed literature, including unusually high publication velocity, abrupt thematic dispersion, and stylistic uniformity across unrelated domains, we argue that such outputs may challenge the authenticity, epistemic value, and editorial function of scientific correspondence. We do not present empirical proof of misconduct, but rather outline a conceptual framework for understanding this emerging risk and propose proportionate editorial safeguards, including cross-domain pattern detection and contextual assessment of authorship coherence. As AI lowers the threshold for generating domain-plausible commentary at scale, scientific publishing must adapt its integrity frameworks accordingly. In this context, vigilance toward synthetic authorship may become an essential component of editorial responsibility and post-publication quality control.
The arrival of generative AI (genAI) marks a water-shed moment for academic publishing, offering powerful assistive capabilities while posing novel risks to academic integrity that jeopardize scientific credibility (Thorp 2023). Research integrity sleuths have created online platforms, including the Problematic Paper Screener, Academ-AI, and Retraction Watch, that have documented thousands of examples of publications that used undisclosed genAI (Cabanac et al. 2022; Conroy 2023; Glynn 2025). These findings are particularly sobering when we consider that these numbers reflect published manuscripts and peer reviews, not problematic submissions identified during the review process.
Writing a scientific autobiography is challenging due to the many factors influencing an academic career, including personal experiences, the era of one's career, research interests, available research tools, institutional environment, societal priorities, and mentorship. Despite changing contexts, certain obligations remain constant: scholars have an obligation to prioritize scientific and personal integrity above the pursuit of individual success, and they must ethically use research tools, including information technology search processes and writing technologies, as part of their scientific processes. The first obligation is built on personal integrity, which in turn characterizes how the second is implemented. My career spanned the last quarter of the 20th century through the first quarter of the 21st century. During this period, there were significant advancements in research methodologies, including the capability of internet search technologies to access scientific databases and desktop writing and referencing software. These advances pale in comparison to the meteoric rise of artificial intelligence large language models (AI LLMs) over the last 5 years, where information retrieval and writing tools have transformed how we access and use information - potentially placing our personal and authorial integrity at risk. 
Given this rapid transformation, my autobiography's objectives are: 1) to share my developmental perspectives on personal and authorial integrity and examine the impact of early experiences where perceived plagiarism helped me define those principles; 2) to review professional guidelines concerning research integrity and policy recommendations for AI LLMs; 3) to discuss changes in my scholarly content resulting from evolving search strategies, writing tools, and journal growth; and 4) by using examples of AI-generated and AI-assisted writing samples, to address the influence of AI LLMs on authorial integrity, including practical risks, opportunities, and current recommended strategies for managing AI LLMs in scholarly writing. By sharing my perspectives, I hope to provide guidance for those pursuing scholarly careers that ensures their authorial integrity.
This article presents a Delphi consensus developed by a panel of editors-in-chief of anaesthesiology and pain medicine journals to guide the responsible use of large language models (LLMs) in academic publishing. LLMs offer potential benefits for scientific writing, including language editing, summarisation, translation, information organisation, and support for non-native English speakers, but their misuse raises concerns about accuracy, transparency, confidentiality, and research integrity. Through a three-round modified Delphi process involving 53 editors-in-chief or their delegates, 59 statements were generated and categorised into guidance for authors, editors, reviewers, and publishers, with particular attention to LLM disclosure practices and perceived risks. The consensus recognises that LLMs are useful tools in academic publishing for authors, reviewers, and editors. However, their use must be guided by ethics, legality, and principles of transparency and accountability. LLMs may assist with limited editorial and authorial tasks provided that their use is fully disclosed and all outputs are verified by humans. The consensus also emphasises the inappropriateness of using LLMs to generate original or ideative content, which should remain a strictly human responsibility. Moreover, LLMs must not generate data, references, conclusions, or entire manuscripts, nor be used for editorial decisions or peer-review reports. Editors expressed concerns about 'hallucinations', erosion of critical skills, confidentiality breaches, and the proliferation of low-quality LLM-generated manuscripts. The resulting guidance highlights transparency, human accountability, and careful verification as essential principles for integrating LLMs into scholarly workflows while preserving the integrity of scientific publishing.
Introduction The exponential increase in systematic reviews (SRs), accelerated by LLM-based generative AI and non-LLM automation tools, risks redundancy, overlap, and research waste. However, there is limited empirical evidence on how SRs that disclose AI use apply and report these tools in practice, including the extent of transparency and validation. Aim To assess the methodological and reporting features of SRs that explicitly acknowledge LLM-based and non-LLM automation tools’ use in a dedicated statement, and to examine how these features relate to bibliometric characteristics of these SRs. Methods An exploratory, cross-sectional, meta-research study with individual SRs as the unit of analysis. A random sample was drawn from a purposively defined stratum, comprising only SRs with designated AI statements. Screening was conducted by a single researcher; data extraction was performed by one researcher and independently verified by four others. Descriptive analyses were supplemented by Wilcoxon rank-sum tests, Spearman’s ρ, and χ² tests. Results We included 188 SRs; 75% reported using LLMs, and in 92% of studies LLM-based and non-LLM automation tools were used for manuscript writing. Reviews with designated AI statements were predominantly published in Elsevier or Elsevier-supported journals (70.2%). Only 42% referenced a pre-registered protocol; the median time from protocol registration to first journal submission was 267 days. Reviews with more included studies were published in higher-impact journals (ρ = 0.34, p < 0.0001), as were reviews led by authors affiliated with high-income countries (W = 1931.5, p < 0.0001). Reviews with more authors were more likely to have a pre-registered protocol (χ² = 20.54, p < 0.0001), and pre-registered reviews more often adhered to a reporting checklist (χ² = 8.93, p = 0.0027). Conclusions LLM-based and non-LLM automation tools were used predominantly for writing. 
Sharing of prompts and human-validation procedures was insufficient, and many reviews exhibited methodological and reporting weaknesses. Clearer guidance is needed to support transparent, rigorous use of LLM-based and non-LLM automation tools in SRs. Supplementary Information The online version contains supplementary material available at 10.1186/s12874-026-02796-2.
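The correlational findings reported in this study (e.g., Spearman's ρ = 0.34 between the number of included studies and journal impact) rest on rank-based statistics. As a minimal illustration of the measure itself, here is a pure-Python sketch of Spearman's ρ computed as Pearson correlation over ranks; the data are hypothetical toy values, not the study's dataset.

```python
def _ranks(xs):
    """Return 1-based ranks of xs, with ties receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical toy data (NOT from the study): included-study counts per
# review vs. the publishing journal's impact factor.
n_studies = [5, 12, 30, 8, 45, 22]
impact = [1.2, 2.5, 3.8, 2.0, 5.1, 1.5]
rho = spearman(n_studies, impact)
print(f"Spearman's rho = {rho:.2f}")
```

A positive ρ here simply means larger reviews tended to land in higher-impact journals in this toy sample; the ranking step is what makes the statistic robust to the skewed distributions typical of bibliometric data.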
Abstract Background As AI-use becomes more common in research, disclosure policies have emerged to ensure transparency and appropriateness. However, database research in other fields suggests that disclosure may lag behind AI-use. Medical education journal editors report that submitted manuscripts rarely include AI-use disclosures, and they perceive a lack of clarity regarding when and how AI-use should be disclosed. However, we lack objective evidence regarding the incidence and nature of AI-use disclosure in medical education. Methods Using bibliometric methods, we searched a database of 24 leading medical education journals for articles published between January and July 2025 (n=2,762 articles). Screening with Covidence software excluded 716 non-empirical and/or non-English language articles. The remainder (n=2,046) were examined for the presence of AI-use disclosures, which were content-analyzed. Results 2.5% of empirical articles (n=51) had an AI disclosure statement. BMC Medical Education contained the most disclosures (24), followed by Medical Teacher (7) and Journal of Surgical Education (4). Forty-two articles were authored in non-native English-speaking countries, and 69.4% of all first authors had begun publishing in the past decade. Disclosures averaged 43 words and described use superficially: most commonly “editing” and “translation”. Of 18 named tools, ChatGPT was most common. Most disclosures explicitly attested to author responsibility for AI-produced material. Disclosures usually appeared in acknowledgements; those located in methods lacked responsibility attestation. Negative disclosures attesting that AI was not used were also present. Discussion AI-use disclosures in medical education journals are rare and appear mostly in work from non-native English-speaking regions of the world. A shared disclosure practice is evident: name the tool and affirm author responsibility, but describe use superficially. 
This suggests a practice of “safe” disclosure that may be more performative than informative, therefore failing to satisfy the goal of ensuring transparent and ethical AI use in research.
Large language models are becoming ubiquitous in the editing and generation of written content and are actively being explored for their use in medical education. The use of artificial intelligence (AI) engines to generate content in academic spaces is controversial and has been met with swift responses and guidance from academic journals and publishers regarding the appropriate use and disclosure of AI engines in professional writing. To date, there is no guidance to applicants of graduate medical education programs in using AI engines to generate application content—primarily personal statements and letters of recommendation. In this Perspective, we review perceptions of using AI to generate application content, considerations for the impact of AI in holistic application review, ethical challenges regarding plagiarism, and AI text classifiers. Finally, included are recommendations to the graduate medical education community to provide guidance on use of AI engines in applications to maintain the integrity of the application process in graduate medical education.
Background Citation practices are fundamental to teaching scholarly writing. With the emergence of generative Artificial Intelligence (AI) technologies, students need a structured way to cite when and how these technologies are used. Objective This paper introduces an instructor resource, an AI Contribution Statement, which provides students with an ethical and explicit framework for reporting on AI use during idea generation and writing in research methods. Method Students were guided to create an AI Contribution Statement that reports when an AI technology was used for a research paper, what prompts were given and text generated, and how the information was incorporated into a final written product. Results Sixty-four percent of students reported using AI assistive technologies. Of those, 33.12% reported using it more than twice, suggesting that, when allowed in a course, students’ use is relatively low. Conclusion Training students in best citation practices regarding ethical and transparent use of AI technologies is important, yet additional research is needed to understand how students are using it and how instructors can leverage this tool to foster equity. Teaching Implications An AI Contribution Statement is an important addition to research methods teaching to create equality in technology use and student success.
Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. In the interests of fostering a wider conversation about how generative AI may be used, we have developed a preliminary set of recommendations for its use in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.
… Generally, AI used to improve spelling, grammar or manuscript structure does not need to be listed, but we recommend authors check the journal's specific policy to ensure adherence. …
While generative artificial intelligence (AI) technology has become increasingly competitive since OpenAI introduced ChatGPT, its widespread use poses significant ethical challenges in research. Excessive reliance on tools like ChatGPT may intensify ethical concerns in scholarly articles. Therefore, this article aims to provide a comprehensive narrative review of the ethical issues associated with using AI in academic writing and to inform researchers of current trends. Our methodology involved a detailed examination of literature on ChatGPT and related research trends. We conducted searches in major databases to identify additional relevant articles and cited literature, from which we collected and analyzed papers. We identified major issues from the literature, categorized into problems faced by authors using nonacademic AI platforms in writing and challenges related to the detection and acceptance of AI-generated content by reviewers and editors. We explored eight specific ethical problems highlighted by authors and reviewers and conducted a thorough review of five key topics in research ethics. Given that nonacademic AI platforms like ChatGPT often do not disclose their training data sources, there is a substantial risk of unattributed content and plagiarism. Therefore, researchers must verify the accuracy and authenticity of AI-generated content before incorporating it into their article, ensuring adherence to principles of research integrity and ethics, including avoidance of fabrication, falsification, and plagiarism.
BACKGROUND A personal statement is a common requirement in medical residency and fellowship applications. Generative artificial intelligence may be used to create a personal statement for these applications. METHODS Two personal statements were created using OpenAI's Chat Generative Pre-trained Transformer (ChatGPT) and two applicant-written statements were collected. A survey was sent to obstetric anesthesia fellowship program directors in the United States to assess the perceived readability, authenticity, and originality of the four personal statements. In addition, the survey assessed perceptions of applicants who use artificial intelligence to write a personal statement, including their integrity, work ethic, reliability, intelligence, and English proficiency. RESULTS Surveyed fellowship directors could not accurately discern whether statements were applicant-written or artificial intelligence-generated. The artificial intelligence-generated personal statements were rated as more readable and original than the applicant-written statements. Most program directors were moderately or extremely concerned about the applicant's integrity, work ethic, and reliability if they suspected the applicant utilized ChatGPT. CONCLUSIONS Program directors could not accurately discern if the statements were written by a person or artificial intelligence and would have concerns about an applicant suspected of using artificial intelligence. Medical training programs may benefit from outlining their expectations regarding applicants' use of artificial intelligence.
… and communicate this protocol to all relevant regulatory review bodies. Artificial intelligence [AI] is becoming more popular for writing and editing manuscripts, owing to its ease of use …
… to this statement as artificial intelligence technologies continue to … guide our decisions on manuscripts submitted to our journal. … For example, if a literature review is submitted, we would …
… In this paper, we use the term GAI to refer to systems capable of … they do not misattribute AI-generated statements as their own, … Following submission, we received peer-review feedback …
Objectives Generative artificial intelligence (GAI) tools can enhance the quality and efficiency of medical research, but their improper use may result in plagiarism, academic fraud and unreliable findings. Transparent reporting of GAI use is essential, yet existing guidelines from journals and institutions are inconsistent, with no standardised principles. Design and setting International online Delphi study. Participants International experts in medicine and artificial intelligence. Main outcome measures The primary outcome measure is the consensus level of the Delphi expert panel on the items of inclusion criteria for GAMER (reporting guideline for the use of Generative Artificial intelligence tools in MEdical Research). Results The development process included a scoping review, two Delphi rounds and virtual meetings. 51 experts from 26 countries participated in the process (44 in the Delphi survey). The final checklist comprises nine reporting items: general declaration, GAI tool specifications, prompting techniques, tool’s role in the study, declaration of new GAI model(s) developed, artificial intelligence-assisted sections in the manuscript, content verification, data privacy and impact on conclusions. Conclusion GAMER provides a universal and standardised guideline for GAI use in medical research, ensuring transparency, integrity and quality.
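The nine GAMER reporting items lend themselves to a structured disclosure record that a submission system could collect and validate. The sketch below is purely illustrative: the field names and example values are my own paraphrase of the nine items listed in the abstract, not official GAMER wording.

```python
# Illustrative machine-readable GAI-use disclosure keyed to the nine GAMER
# reporting items. Field names and example values are hypothetical, not
# official GAMER wording.
gamer_disclosure = {
    "general_declaration": "GAI tools were used in preparing this manuscript.",
    "tool_specifications": {"name": "ChatGPT", "version": "GPT-4", "vendor": "OpenAI"},
    "prompting_techniques": "Iterative prompts requesting language editing of drafts.",
    "role_in_study": "Language editing only; no data generation or analysis.",
    "new_models_developed": None,  # no new GAI model was developed for the study
    "ai_assisted_sections": ["Introduction", "Discussion"],
    "content_verification": "All AI-edited text was reviewed and verified by the authors.",
    "data_privacy": "No participant or confidential data were entered into the tool.",
    "impact_on_conclusions": "None; all conclusions were drawn solely by the authors.",
}

print(f"{len(gamer_disclosure)} of 9 GAMER items covered")
```

Storing the declaration as structured data rather than free text would let journals check completeness automatically (every item present, even if the answer is "not applicable"), which is exactly the gap the bibliometric studies above identify in current acknowledgement-style disclosures.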
The integration of generative artificial intelligence (AI) into academic research writing has revolutionized the field, offering powerful tools like ChatGPT and Bard to aid researchers in content generation and idea enhancement. We explore the current state of transparency regarding generative AI use in nursing academic research journals, emphasizing the need for authors to explicitly declare the use of generative AI in the manuscript. Out of 125 nursing studies journals, 37.6% required explicit statements about generative AI use in their authors' guidelines. No significant differences in impact factors or journal categories were found between journals with and without such a requirement. A similar evaluation of medicine, general and internal journals showed a lower percentage (14.5%) including information about generative AI usage. Declaring generative AI tool usage is crucial for maintaining transparency and credibility in academic writing. Additionally, extending the requirement for AI usage declarations to journal reviewers can enhance the quality of peer review and combat predatory journals in the academic publishing landscape. Our study highlights the need for active participation from nursing researchers in discussions surrounding standardization of generative AI declaration in academic research writing.
Background: When properly utilized, artificial intelligence generated content (AIGC) may improve virtually every aspect of research, from data gathering to synthesis. Nevertheless, when used inappropriately, the use of AIGC may lead to the dissemination of inaccurate information and introduce potential ethical concerns. Research Design: Cross-sectional. Study Sample: 65 top surgical journals. Data Collection: Each journal's submission guidelines and submission portal were queried for guidelines regarding AIGC use. Results: We found that, in July 2023, 60% of the top 65 surgical journals had introduced guidelines for use, with more general surgical journals (68%) introducing guidelines than surgical subspecialty journals (52.5%), including otolaryngology (40%). Furthermore, of the 39 with guidelines, only 69.2% gave specific use guidelines. No included journal, at the time of analysis, explicitly disallowed AIGC use. Conclusions: Altogether, these data suggest that while many journals have quickly reacted to AIGC usage, the quality of such guidelines is still variable. This should be pre-emptively addressed within academia.
The rapid diffusion of generative artificial intelligence (GenAI) tools—especially large language models (LLMs)—is reshaping scholarly publishing worldwide. While these tools can support language editing, translation, and workflow efficiency, they also raise integrity risks, including fabricated citations, unverifiable claims, undisclosed ghostwriting, confidentiality breaches in peer review, and contested ownership of AI-assisted outputs. Vietnam’s journal ecosystem is currently navigating internationalization pressures (e.g., indexing and visibility goals) alongside uneven editorial capacity and fragmented policy infrastructure, making it a critical setting for examining responsible governance of AI-generated content (AIGC). This study reports an exploratory policy-and-practice mapping across five Vietnam-affiliated publishing contexts (university-based open access journals, an internationally co-published journal, a defense-related journal, and law/social-science publishing). Using structured qualitative content analysis, we identify shared norms (e.g., “AI cannot be an author,” accountability remains human) but also substantial variation in disclosure requirements, treatment of AI-generated images and references, restrictions on reviewer use of AI tools, and clarity of enforcement mechanisms. Building on these findings and international literature, we propose a Vietnam-tailored governance framework that combines (i) risk-tiered allowable uses, (ii) mandatory disclosure and provenance documentation, (iii) human-in-the-loop editorial controls, and (iv) capacity-building measures aligned with open science principles. The paper contributes practical templates (disclosure language, policy clauses, and a workflow-integrated checklist) to support journals, editors, and research institutions seeking credible, implementable AI governance.
… jurisdictions for encoding rules about AI-generated content, and we recommend policymakers … such rules. In relation to existing rules: the EU’s AI Act does in fact envisage a ‘disclosure …
… to help audiences identify AI-generated misinformation [… laws requiring content creators to disclose AIGC. For example, China mandates that service providers add markings to content …
… Not all of these suggestions or ideas were used, and all AI-generated content was reviewed and … Disclosure rules need to be explicit and dynamic. Attending to these thresholds and the …
The digitization of research has transformed how evidence is gathered, hypotheses are generated, and manuscripts are written, introducing ethical challenges related to plagiarism, authorship, and artificial intelligence (AI) in medical writing. This structured narrative review, informed by a comprehensive database search (PubMed/MEDLINE, EMBASE, Scopus, and Web of Science; January 2005-March 2026), examines contemporary approaches to prevent plagiarism and ensure the ethical use of AI in medical publishing. While a systematic search of PubMed, EMBASE, Scopus, and Web of Science was applied, the objective was conceptual synthesis rather than quantitative meta-analysis. While AI can enhance efficiency and quality, its misuse, whether through unacknowledged use, over-reliance, or biased outputs, poses a threat to scholarly integrity. Safeguarding trust in medical literature requires a proactive framework that combines plagiarism detection, mandatory AI disclosure, ethical training, and strict editorial oversight. Ultimately, technology may support, but cannot replace, the accountability and ethical responsibility of human authors.
The rapid proliferation of artificial intelligence (AI) tools has posed new challenges for academic integrity in higher education. The use of large language models (LLMs), machine translation, and paraphrasing tools complicates the identification of student-authored texts and raises urgent questions regarding authorship attribution in academic foreign language learning. This paper examines the limitations of existing AI text detection tools and presents the results of a pilot experiment involving English as a Foreign Language (EFL) student essays tested against two widely used detectors, ZeroGPT and GPTZero. The study demonstrates that while both detectors achieve high group-level separability (AUC ≈ 0.94), their reliability at the level of individual assignments is unacceptably low, with error rates of up to 29%. These findings suggest that detectors may be useful as supplementary indicators but cannot serve as independent proof of academic dishonesty. The article concludes by arguing for process-oriented approaches in assessment, emphasizing documentation of the writing process, oral defenses, and student reflection on AI use.
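The gap this study highlights, strong group-level separability alongside weak per-essay reliability, can be illustrated with a small simulation. The scores below are purely synthetic (not ZeroGPT or GPTZero outputs): the point is that a detector can reach AUC above 0.9 while still misclassifying a nontrivial share of individual essays at any fixed decision threshold.

```python
import random

random.seed(42)

# Hypothetical detector scores; higher score = "more likely AI-generated".
# Overlapping distributions stand in for human- and AI-written essays.
human = [random.gauss(0.35, 0.15) for _ in range(500)]
ai = [random.gauss(0.70, 0.15) for _ in range(500)]

# Group-level separability: AUC equals the probability that a randomly
# chosen AI essay scores higher than a randomly chosen human essay.
wins = sum(a > h for a in ai for h in human)
auc = wins / (len(ai) * len(human))

# Individual-level reliability at a fixed decision threshold.
threshold = 0.5
false_pos = sum(h >= threshold for h in human) / len(human)  # humans flagged as AI
false_neg = sum(a < threshold for a in ai) / len(ai)  # AI essays passed as human

print(f"AUC                 = {auc:.2f}")
print(f"false-positive rate = {false_pos:.1%}")
print(f"false-negative rate = {false_neg:.1%}")
```

Even with an AUC in the mid-0.9s, the two distributions overlap enough that a double-digit percentage of individual essays falls on the wrong side of the threshold, which is why the authors treat detector verdicts as supplementary indicators rather than proof.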
… investigates authorship verification (AV) techniques to quantify AI … authorship problems from 506 students. Next, we developed an adapted Feature Vector Difference (FVD) authorship …
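The fragment above names an adapted Feature Vector Difference (FVD) method for authorship verification, but the paper's implementation details are not reproduced here. As a rough, generic sketch of the underlying idea only, one can extract stylometric feature vectors from a known-author sample and a questioned text, then verify authorship by thresholding the distance between them. Every feature, the distance metric, and the threshold below are illustrative assumptions, not the cited paper's method.

```python
import math

def features(text):
    """Crude stylometric feature vector: mean word length,
    mean sentence length, and type-token ratio."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return [
        sum(len(w) for w in words) / len(words),          # mean word length
        len(words) / len(sentences),                       # mean sentence length
        len(set(w.lower() for w in words)) / len(words),   # type-token ratio
    ]

def fvd_distance(known, questioned):
    """Euclidean distance between the two texts' feature vectors."""
    a, b = features(known), features(questioned)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_author(known, questioned, threshold=2.0):
    """Accept the authorship claim when the feature-vector
    difference falls below an (assumed) calibrated threshold."""
    return fvd_distance(known, questioned) < threshold
```

In practice such systems use far richer feature sets (character n-grams, function-word frequencies, syntactic features) and learn the threshold from labeled verification pairs; the sketch only shows the compare-by-difference structure the name implies.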
This article examines the institutional, technical, and ontological consequences of plagiarism detection platforms on contemporary academic writing. It argues that services like Turnitin and Grammarly do not simply detect plagiarism but actively reconfigure what counts as authorship, originality, and intellectual integrity. Drawing on historical and media-theoretical perspectives, the article traces plagiarism’s emergence as a culturally contingent offense, showing how its definition has shifted from oral cultures all the way to algorithmically processed patterns. In this transformation, plagiarism detection becomes less a neutral evaluative tool than a form of infrastructural governance that translates interpretive judgment into machinic legibility. Drawing on theorists such as Foucault, Chun, Kittler, Flusser, and Byung-Chul Han, the article situates plagiarism detection within a broader shift from expressive authorship to operational formatting. The article argues that plagiarism detection tools function as psychopolitical instruments, reshaping academic labor through anticipatory compliance and rendering originality as machinic compatibility.
The emergence of generative artificial intelligence (AI) technologies, such as large language models (LLMs) like ChatGPT, has precipitated a paradigm shift in academic writing, plagiarism, and intellectual property. This article explores the evolving landscape of English composition courses, traditionally designed to develop critical thinking through writing. As AI becomes increasingly integrated into the academic sphere, it necessitates a reevaluation of originality in writing, the purpose of learning research and writing, and the frameworks governing intellectual property (IP) and plagiarism. The paper commences with a statistical analysis contrasting the actual use of LLMs in academic dishonesty with educator perceptions. It then examines the repercussions of AI-enabled content proliferation, citing Amazon's September 2023 cap of three self-published books per day, imposed in response to a suspected influx of AI-generated material. The discourse extends to the potential of AI to accelerate research, akin to the contributions of digital humanities and computational linguistics, highlighting its accessibility to the general public. The article further examines the implications of AI for pedagogical approaches to research and writing, contemplating its impact on communication and critical thinking skills, while also considering its role in bridging the digital divide and socio-economic disparities. Finally, it proposes revisions to writing curricula, adapting to the transformative influence of AI in academic contexts.
The merged, unified grouping organizes research on "normative constraints and application boundaries of generative AI in academic writing" along eight parallel threads. It starts from the mechanisms of academic integrity violations and ethical misconduct (including the detection dilemma), frameworks for authorship and accountability, and journal/publisher policies with standardized disclosure checklists (thresholds and granularity); it then delimits the technical and institutional boundaries of permissible AI use (accuracy, privacy, traceability, human-in-the-loop). In parallel, it pursues the internalization of norms through reform of education and assessment, and proposes scenario-specific pipelines for legitimate use in particular tasks and disciplines. Finally, through institutional evolution and practice mapping, it explains why gaps persist in policy implementation and why continuous iteration is needed. The overall thrust: technical capability is only the foundation; genuine compliance and trustworthiness depend on human controllability, verifiable and transparent disclosure, and rigorous governance of editorial and author responsibility.