Abstract
The rapid integration of Generative Artificial Intelligence (GenAI) into scholarly publishing presents both transformative potential and ethical challenges. This study examines how academic institutions and journals address these challenges, focusing on authorship, peer review, early-career researcher development, and governance policies. Employing a qualitative research design, the study draws on documentary analysis of 42 academic institutions and 15 scholarly journals, supplemented by semi-structured interviews with 24 stakeholders, including editors, research ethics officers, and researchers across disciplines and regions. Findings reveal a fragmented and evolving regulatory landscape marked by inconsistent institutional policies, limited editorial transparency, and uncertainty regarding the ethical use of GenAI. Key concerns include unclear authorship attribution, the potential for fabricated citations, and erosion of scholarly voice, particularly affecting early-career and multilingual researchers. While many participants acknowledged the advantages of GenAI in enhancing writing support and language accessibility, they also emphasised the importance of safeguards to uphold academic integrity. The study highlights the need for tiered AI disclosure requirements, integration of AI ethics into research training, and international policy alignment through organisations such as COPE and UNESCO. Responsible governance of GenAI requires coordinated efforts across institutions, journals, and educational frameworks to ensure ethical and inclusive scholarly communication.