
AI usage policy

This policy outlines the acceptable use of artificial intelligence (AI) by authors, editorial teams, and peer reviewers in the preparation or evaluation of work submitted to the journal. Specific use cases for this technology are detailed below; any inquiries should be directed to the journal's editorial office.

The use of generative AI tools or technologies for copywriting—that is, creating, drafting, or composing any part of a submission—is strictly prohibited. However, the use of generative AI tools for copyediting—including correcting, editing, formatting, modifying, or refining an author's original work to enhance structural coherence, linguistic clarity, and grammatical accuracy—is permitted, provided that the following overarching principles are observed.

Overarching Principles for AI Usage

  • Authors and peer reviewers bear full responsibility and accountability for the accuracy and integrity of their work.
  • AI tools and technologies must be employed in a responsible and transparent manner.
  • AI tools should serve as a supplement to, rather than a substitute for, human judgment and involvement in the publication process.

The journal acknowledges the increasing role of AI technologies in academic content development and recognizes that authors may wish to utilize these tools during manuscript preparation. Responsible and productive use of AI requires human oversight, authorship, creativity, and expertise, ensuring that the resulting content remains original and consistent with professional ethical and publishing standards. The application of AI tools must comply with privacy and confidentiality obligations, data protection regulations, intellectual property rights, copyright and licensing requirements, and editorial standards, while promoting transparency for readers. Authors remain fully accountable for all sources used and for the integrity of the submitted work.

Declaration of AI Usage

Authors must clearly and transparently disclose any use of AI tools, including Large Language Models (LLMs), for any permitted purpose in JSDIO publications. Such disclosures must be made within the designated sections of the manuscript (e.g., the 'Declaration of AI Usage' section). Authors must specify the nature of the content created or modified and provide the name and version of the AI tool employed. Additionally, any external works utilized by the AI tool must be properly cited and referenced, with all necessary permissions for reproduction secured. Failure to provide this information may result in requests for clarification during submission or peer review, or post-publication inquiries, and could lead to rejection of the manuscript or corrective action after publication. Authors are required to adhere to the journal's principles governing the use of generative AI and to review the terms and conditions associated with the AI tools employed to ensure compliance.

Authors assume full responsibility for the accuracy of all submitted content, including citations and references, and must verify that the material is correct, appropriately aligned with the research, and consistent with the journal's research and publishing ethics. Standard tools that are used for spelling and grammar correction, which do not employ generative AI, fall outside the scope of this policy. The journal reserves the right to determine the permissibility of AI tool usage in submitted work and retains the authority to reject submissions or take appropriate post-publication action in cases where fabricated or fraudulent AI-generated content is identified.

Key Principles for the Use of Generative AI

To ensure that AI tools and technologies are utilized in an accountable, responsible, and transparent manner during the preparation of manuscripts for submission, the journal emphasizes the following core principles. These principles are designed to ensure appropriate human oversight throughout the writing process and compliance with the journal's contractual warranties, including assurances that all submitted material is original and unpublished and that permissions have been secured for any third-party content.

AI and Authorship

In alignment with COPE's position statement on the use of AI tools, Large Language Models (LLMs) cannot be credited with authorship. This is because they lack legal standing, cannot assign copyright, are incapable of independently conceptualizing research design without human direction, and cannot assume accountability for the integrity, originality, or validity of the published work.

AI and Content Creation

  • The use of generative AI tools or LLMs for copywriting—including drafting any part of a submission such as the abstract or literature review—is strictly prohibited. However, in accordance with standard academic practice, the journal permits the inclusion of examples of generative AI outputs for illustrative purposes within scholarly critique or discussion. Such examples must be clearly identified in the text and fully cited and referenced in compliance with the journal's formatting requirements.
  • The generation, manipulation, or reporting of research data and results using generative AI tools or LLMs is not permitted.
  • The use of generative AI tools or LLMs for in-text statistical reporting is prohibited due to concerns regarding the authenticity, integrity, and validity of the data produced. However, the use of such tools to assist in data analysis is permissible, provided that this is declared transparently and appropriately.
  • The submission or publication of images created using AI tools or large-scale generative models depends on their intended purpose and on compliance with applicable rights requirements. Such usage must not violate the journal's plagiarism policy. The following AI-generated images are permitted, subject to prior consultation with the editorial office: explanatory diagrams, instructional illustrations, conceptual visualizations, and process flow diagrams.
    • These images must be accurate, must not misrepresent information, and must be clearly labeled as AI-generated in accordance with the journal's attribution policy, including the name and version of the tool used.
    • Artistic renderings, cover art, design images, or graphical abstracts created by generative AI are not permitted. Similarly, factual or evidential images—such as those supporting scientific or technical claims, including experimental data and research results—are prohibited unless sourced from third parties with appropriate permissions and attribution.
    • Any modifications to images or figures using generative AI tools must comply with the journal's policy on image manipulation.

AI and Content Editing

The use of generative AI tools or LLMs for copyediting—to enhance language quality and readability of a submission or peer review report—is permitted. This practice aligns with the use of conventional tools for spelling and grammar correction, as it involves refining existing author-created material rather than generating new content. Authors and peer reviewers remain fully responsible for the original work and must exercise caution to avoid bias, fabrication, misinformation, inaccurate attribution, or plagiarism. All content must be verified prior to submission. Authors should maintain documentation of any AI tools employed for this purpose, and such tools must not be used to replicate the unique work of others.

For clarity, the journal defines copyediting as the modification of existing author-created material to improve language, grammar, and spelling, whereas copywriting refers to the creation of new material. This distinction reflects the guidance provided in STM's Recommendations for a Classification of AI Use in Academic Manuscript Preparation.

Authors and peer reviewers bear full responsibility for any work submitted to the journal and remain accountable for the accuracy and integrity of all AI usage.

AI Evaluation and Peer Review

The journal adheres to the following fundamental principles regarding the use of artificial intelligence (AI) by editorial teams and peer reviewers:

  1. Under no circumstances should any manuscript or associated files submitted to the journal for consideration or review be uploaded to a generative AI tool or Large Language Model (LLM).
  2. Reviewers may utilize a generative AI tool exclusively for the purpose of copyediting their review to enhance linguistic clarity and readability. In such cases, reviewers retain full responsibility for the accuracy and integrity of the review and must disclose this usage transparently to the editorial team.

All materials submitted for review must be treated as strictly confidential. Sharing such content with third parties or uploading it to a generative AI tool or LLM for assessment or evaluation constitutes a breach of author confidentiality and may infringe upon proprietary rights and data privacy obligations.

The use of generative AI tools in the peer review process raises additional concerns, including inherent biases within model training datasets and the unreliability of AI-generated assessments, which may result in false, flawed, or inaccurate evaluations. To preserve trust in the integrity of the scholarly record, the journal prohibits the use of generative AI tools or LLMs in any aspect of the review, evaluation, or decision-making process for manuscripts by either editorial team members or reviewers, in accordance with the journal’s principles of peer review. Consequently, any files under review should not be uploaded to a generative AI tool or LLM.

The journal does, however, permit the use of generative AI tools solely for copyediting peer review reports to improve language quality. Reviewers remain fully accountable for the accuracy and integrity of their evaluations and must clearly and transparently declare any such usage.

Peer reviewers bear ultimate responsibility for the rigor, validity, and accuracy of their reviews. As emphasized in COPE's position statement on the use of AI tools, these essential qualities cannot be replicated by non-human generative AI systems. Any violation of the principles outlined above will be regarded as misconduct in the peer review process.

AI Usage Reference Table

The following table serves as a reference for acceptable and prohibited AI use cases in relation to submissions. This list is not exhaustive and is based on STM's Recommendations for a Classification of AI Use in Academic Manuscript Preparation. For additional guidance, please refer to the overarching principles outlined above. Any questions regarding specific use cases not addressed herein should be directed to the appropriate editorial contact.

AI Usage | Description | Permitted?
Abstract Creation | Composing, drafting, or writing any portion of the abstract using author-provided prompts in a generative AI tool or LLM, including expanding text or generating machine-produced summaries of prior work. | NO
Abstract Copyediting | Refining, correcting, editing, or formatting the author's original abstract using a generative AI tool or LLM to enhance linguistic clarity and grammatical accuracy. | YES
Hypothesis Creation | Generating, drafting, or writing any part of the hypothesis or research questions through author-inputted prompts into a generative AI tool or LLM. | NO
Introduction Creation | Composing, drafting, or writing any portion of the introduction using a generative AI tool or LLM, including expanding text or producing machine-generated summaries of prior work. | NO
Introduction Copyediting | Improving the author's original introduction by refining, correcting, editing, or formatting it through a generative AI tool or LLM to enhance clarity and grammar. | YES
Methodology Ideation | Using a generative AI tool or LLM to identify methodological approaches or viable models for an initial research proposal, similar to a traditional search engine or literature scan. | YES
Methodology Creation | Drafting or writing any part of the methodology using a generative AI tool or LLM, including expanding text or generating summaries of prior work. | NO
Methodology Copyediting | Refining, correcting, editing, or formatting the author's original methodology using a generative AI tool or LLM to improve clarity and grammar. | YES
Literature Review and Bibliography Ideation | Employing a generative AI tool or LLM to identify relevant sources or gaps in the literature for an initial research proposal, or to assist in compiling a reference list. | YES
Literature Review and Bibliography Creation | Drafting or writing any part of the literature review or bibliography, or analyzing and synthesizing literature using a generative AI tool or LLM. | NO
Literature Review and Bibliography Copyediting | Refining, correcting, editing, or formatting the author's original literature review or bibliography using a generative AI tool or LLM. | YES
Data Generation | Creating or generating research data or results using author-inputted prompts in a generative AI tool or LLM. | NO
Data Visualization | Producing figures, tables, or infographics to visually represent results based on the author's pre-analyzed data using a generative AI tool or LLM, similar to traditional visualization tools. | YES
Results Analysis | Analyzing or interpreting data/results using a generative AI tool or LLM. | NO
Results Summary | Summarizing the author's original data/results using a generative AI tool or LLM to enhance accessibility and data presentation. | YES
Analysis/Discussion Copyediting | Refining, correcting, editing, or formatting the author's original analysis or discussion section using a generative AI tool or LLM. | YES
Conclusion Creation | Drafting or writing any part of the conclusion using a generative AI tool or LLM. | NO
Conclusion Copyediting | Refining, correcting, editing, or formatting the author's original conclusion using a generative AI tool or LLM. | YES
Code Creation | Generating code for research purposes exclusively through a generative AI tool or LLM without human involvement. | NO
Code Copyediting | Refining, correcting, editing, or formatting the author's original code using a generative AI tool or LLM to improve readability. | YES
Artistic Rendering, Cover Art, Design Images, Graphical Abstracts | Creating images using generative AI tools (e.g., DALL·E) for commercial purposes or to support scientific or technical claims, including text-to-image models for evidential or factual content. | NO
Explanatory Diagrams, Teaching Illustrations, Conceptual Visualizations, Process Flow Diagrams | Producing accurate visual representations of information using generative AI tools, provided the data can be attributed, verified, and validated for accuracy. | YES
Methodological Figure Generation | Generating, refining, correcting, editing, or formatting figures, tables, or infographics to visually represent theoretical concepts or methodologies based on the author's existing framework using a generative AI tool or LLM. | YES
Translation | Translating the author's original, unpublished work into English using a generative AI tool or LLM, provided the author declares such usage and verifies the accuracy and integrity of the content. | YES
Presenting Content as Original Research | Using generative AI tools to create data, text, images, graphs, spectra, or other content and presenting it as original research derived from non-machine sources. | NO

The Journal’s Use of AI Tools and Technology

As a journal dedicated to accountability and transparency, we recognize the importance of employing AI tools and technologies in an ethical and responsible manner to support editorial and publishing workflows. Such use must not compromise the fundamental processes that underpin these workflows, nor diminish the integrity or quality of scholarly content.

The journal acknowledges its ethical responsibility to maintain trust in its processes and publications among authors, readers, and stakeholders. Accordingly, any application of AI tools and technologies by the journal will be disclosed transparently, as appropriate, and will comply with all relevant data protection and privacy regulations. Furthermore, the journal will take into account the potential for structural biases inherent in AI systems, as well as the broader environmental and societal implications of their use, and will actively work to mitigate any adverse effects on individuals and the planet.

Human oversight remains central to all AI applications within the journal's operations. All outputs generated by AI tools are subject to human review, and no decision will ever be made solely on the basis of algorithmic processes, thereby preventing the perpetuation of real-world biases and inequities.

The journal recognizes that the role of AI in scholarly publishing is rapidly evolving. Consequently, we will continue to monitor developments in this area to ensure that our practices, policies, and guidance remain aligned with industry best practices. This commitment ensures that our publications consistently uphold the highest standards of quality and trustworthiness, while maintaining transparency in our workflows for authors, readers, and the wider academic community.