Generative Artificial Intelligence (AI) Policy

Business Review and Case Studies (BRCS) implements policies aligned with Elsevier’s guidelines concerning the responsible use of generative AI tools in the preparation of manuscripts submitted for publication (https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals).

For Authors

  1. Authors may use generative AI or AI-assisted technologies to improve the readability, language quality, or formatting of their manuscripts.
    However, AI tools must not be used to perform core scholarly tasks, such as formulating hypotheses, interpreting data, drawing conclusions, or generating novel scientific insights.

  2. Any use of generative AI must be conducted under active human oversight.
    Authors are fully responsible for verifying the accuracy, validity, and originality of all AI-assisted content.

  3. Authors are required to disclose any use of generative AI tools in a dedicated Generative AI Statement within the manuscript.

    The disclosure should clearly identify the tool or service used and its purpose, and affirm the author's responsibility for verifying and editing the output. A standard disclosure statement should be written as follows:

    During the preparation of this manuscript, the author(s) used [TOOL/SERVICE NAME] for [SPECIFIC PURPOSE]. After using this tool/service, the author(s) reviewed and revised the content as necessary and assume full responsibility for the accuracy and integrity of the published work.

    Example:

    During the preparation of this manuscript, the author(s) used ChatGPT (OpenAI, 2025) to improve grammar and clarity. After using this tool/service, the author(s) reviewed and revised the content as necessary and assume full responsibility for the accuracy and integrity of the published work.

  4. AI tools must not be listed as authors or co-authors.
    They cannot take responsibility for the content, integrity, or ethical implications of a scientific work.

  5. Authors must not use AI tools to generate or alter images, figures, or graphical data unless such use is part of the research methodology and is clearly described in the Methods section.
    In such cases, authors must specify the tool used, its version or parameters, and provide the original, unaltered data upon request.


For Reviewers

  1. Reviewers must not upload manuscripts, or any part thereof, to publicly accessible AI systems, as submitted manuscripts are confidential.

  2. Generative AI tools must not be used to evaluate or summarize scientific content or provide peer review judgments, since peer review requires human expertise and responsibility.

  3. Reviewers are accountable for the full content of their review reports.
    The use of non-generative AI tools (e.g., for grammar checking) is permissible, but generative AI must not be relied upon for analytical or evaluative content.


For Editors

  1. Editors must treat all submitted manuscripts as confidential and must not upload manuscripts or related content into AI tools for language or content processing unless confidentiality and copyright are fully protected.

  1. Editorial decisions must rest on human judgment.
    Generative AI may assist with technical checks (e.g., plagiarism detection, metadata review), but it must not replace editorial evaluation or decision-making.

  3. If editors identify potential violations of this AI policy, they must take appropriate action, such as requesting clarification or revisions, or rejecting the submission.