Artificial Intelligence (AI) Policy

Journal: Media for Empowerment, Mobilization, and Innovation in Research & Community (MEMOIRS-C) · Version 1.0 · Effective: 15 October 2025
Article types covered: Original Articles · Review Papers · Short Communications (Community Outcomes)

1) Purpose & Scope

This policy governs the transparent, ethical, and accountable use of Artificial Intelligence (AI) and AI-assisted tools across all submissions to MEMOIRS-C—Original Articles, Review Papers, and Short Communications that document community-based outcomes. AI may aid clarity, organization, or workflow; however, full intellectual, ethical, and scholarly responsibility for the manuscript resides entirely with the human authors.

Guiding Principle: AI may assist communication; it cannot assume authorship, academic judgment, or moral accountability.

2) Definitions

AI-assisted technologies include systems capable of generating, refining, translating, summarizing, or transforming text, images, code, data, audio, or video (e.g., large language models, code assistants, image synthesis engines). MEMOIRS-C distinguishes between tools used solely for linguistic refinement and those used for content generation, analysis, or media production.

3) Authorship & Accountability

  • AI systems are not authors. Only humans who meet authorship criteria may be listed as authors. AI cannot provide consent or accept responsibility.
  • Human responsibility is non-delegable. Listed authors are accountable for all content—concepts, data, analyses, citations, and conclusions—including any AI-assisted passages.

4) Mandatory Disclosure

At submission, authors must disclose any use of AI tools—including language editing, ideation, drafting, coding, analysis, image/media generation, translation, transcription, anonymization, or data extraction. The disclosure must appear under the subsection “Declaration of AI Use” placed before the References.

Include: tool/provider and model/version; purpose of use; a concise description of inputs (no confidential material); human validation steps; and, where applicable, links to reproducible materials.

5) Acceptable Uses (with safeguards)

  1. Language polishing/translation with thorough human review and approval.
  2. Coding or computational assistance (e.g., boilerplate scripts), validated by the authors; share reproducible code/data when applicable.
  3. Exploratory aids (e.g., clustering or summarization) interpreted by humans, with clear audit trails.
Authors must ensure accuracy, originality, legal/ethical compliance, and full disclosure of AI use.

6) Prohibited or Restricted Uses

  • Listing AI systems as authors or failing to disclose substantive AI use.
  • Fabrication/hallucination of content (e.g., invented references, data, approvals).
  • Uploading confidential manuscripts or sensitive data to public AI services without enforceable confidentiality and legal compliance.
  • Generative images/video are not accepted by default; rare exceptions require documented rights/provenance and explicit editorial approval.
Non-compliance may lead to revision requests, rejection, retraction, or institutional notifications, consistent with COPE workflows.

7) Data, Privacy & Community Safeguards

For community-involving research, authors must not expose personal identifiers, health data, geolocation, or culturally sensitive materials to AI tools lacking enforceable privacy protections. Obtain consent for AI-based processing whenever individuals or communities might be identifiable. Anonymization steps must be documented and verified by humans.

8) Short Communications (Community Outcomes)

Where AI supports community programs (e.g., translation of public materials, survey summarization), authors must disclose the tool, purpose, inputs, and expert oversight; confirm cultural appropriateness and factual accuracy; and ensure no infringement of community IP/Traditional Knowledge or personal-data leakage.

9) Peer Review & Editorial Use of AI

  • Reviewers must not upload manuscripts or reports to public generative AI tools. Confidentiality is paramount. Institutionally controlled utilities may be used for limited grammar/reference checks; reviewers retain full responsibility.
  • Editors may use vetted, privacy-preserving tools for integrity checks (e.g., plagiarism screening, image manipulation, statistical red flags). Editorial judgments remain human decisions.

10) Documentation & Reproducibility

Authors should maintain records of the tool/provider, model/version/date, key settings or prompts, datasets provided to the tool, and human validation steps. These records must be supplied upon editorial request and, where appropriate, accompanied by reproducible materials.
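The records described above can be kept in any form; as an illustration only, the sketch below shows one way an author team might log AI use in a machine-readable entry. The field names and the helper function are hypothetical, not a schema mandated by MEMOIRS-C.

```python
import json
from datetime import date

# Illustrative fields only -- MEMOIRS-C does not prescribe a specific schema.
REQUIRED_FIELDS = {"tool_provider", "model_version", "date_used",
                   "purpose", "human_validation"}

def make_ai_use_record(**fields):
    """Build a disclosure record and check that the core fields are present."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"Missing disclosure fields: {sorted(missing)}")
    return fields

record = make_ai_use_record(
    tool_provider="ExampleAI (hypothetical provider)",
    model_version="example-model-1.0",
    date_used=str(date(2025, 10, 15)),
    purpose="language polishing of the Methods section",
    human_validation="all edited passages re-read and approved by both authors",
    prompts="'Improve the clarity of this paragraph' (no confidential input)",
)
print(json.dumps(record, indent=2))
```

A log of this kind makes it straightforward to supply the tool, version, date, prompts, and validation steps on editorial request, as this section requires.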

11) Compliance & Consequences

The journal may conduct screening for undisclosed AI use, fabricated references, or manipulated media. Non-compliance may result in editorial action, including requests for revision, rejection, retraction, or notification to institutions/funders, in line with standard ethical procedures.

12) Copy-Ready “Declaration of AI Use” Templates

A) Language editing / translation

B) Content assistance (ideas, drafting, summarisation)

C) Code/data/statistical assistance

D) Image/media generation (exceptional)

E) Community programmes (Short Communication)

Queries regarding this policy may be directed to the Editorial Office via the journal homepage: memoirs-c.org.