Generative AI (GenAI) Policy – ACS Publisher
ACS Publisher recognizes the growing role of Generative Artificial Intelligence (GenAI) and AI-assisted tools in research and scholarly publishing. To maintain transparency, academic integrity, and ethical standards, the following policy applies to all journals published by ACS Publisher.
1. Permitted Use
Authors may use GenAI or AI-assisted tools only for language improvement, grammar correction, reference formatting, or clarity enhancement. These tools may assist in refining writing style but must not be used to create original content, results, or analyses.
2. Prohibited Use
The use of GenAI for data fabrication, interpretation of results, creation of images or figures, or writing core research content is strictly prohibited.
AI systems must not be used to generate novel findings, conceptual contributions, or scientific arguments on behalf of the author.
3. Disclosure Requirement
Authors must explicitly disclose in their manuscript (in the Acknowledgments or Methods section) any use of GenAI or AI-assisted tools during manuscript preparation.
Transparency regarding AI use is a mandatory ethical requirement under ACS Publisher policy.
4. Accountability
Authors bear full responsibility for the originality, validity, and accuracy of the content submitted to any ACS journal.
AI tools may not be listed as authors or co-authors under any circumstances.
5. Editorial and Review Integrity
Editors and reviewers are likewise required to disclose any use of AI-assisted tools in manuscript evaluation or the peer-review process to ensure ethical and transparent editorial handling.
6. Reference Standards and Global Guidelines
ACS Publisher aligns with international best practices on GenAI use in scholarly communication, as defined by leading organizations and publishers.
The following resources guide our ethical framework:
- Elsevier – The Use of Generative AI and AI-Assisted Technologies in the Review Process: https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-the-review-process
- Elsevier – Generative AI Policies for Journals: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
- STM – Recommendations for a Classification of AI Use in Academic Manuscript Preparation: https://stm-assoc.org/document/recommendations-for-a-classification-of-ai-use-in-academic-manuscript-preparation/
- WAME – Chatbots, Generative AI, and Scholarly Manuscripts: https://wame.org/page3.php?id=106
- WMA – Declaration of Helsinki (Ethical Principles for Medical Research Involving Human Subjects): https://www.wma.net/policies-post/wma-declaration-of-helsinki/
- EU Directive 2010/63/EU for Animal Experiments: https://eur-lex.europa.eu/eli/dir/2010/63/oj/eng
- International Committee of Medical Journal Editors (ICMJE) – Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals: https://www.icmje.org/recommendations/
- Elsevier – Publishing Ethics Resource Kit (PERK): https://www.elsevier.com/editor/perk