5 GenAI Use Guidelines
Responsible use of generative artificial intelligence (GenAI)
5.1 Timeline
This document was updated on 06-January-2026. It should be reviewed periodically, particularly when adopting new GenAI tools or when disciplinary norms evolve (while resisting harmful, shifting baselines).
5.2 Purpose
This document sets out principles, guardrails, and documentation practices for the responsible use of generative artificial intelligence (GenAI) tools in scientific research and analysis. One GenAI tool that most of us are familiar with is ChatGPT (Generative Pre-trained Transformer), a model that can interpret (and arguably understand) human language and generate human-like text by predicting what comes next in a sentence. Large Language Models (LLMs) are advanced GenAI systems trained on massive amounts of text that can answer complex questions, write in particular rhetorical tones, summarize information, and hold conversations in natural language. Precisely because these tools are so widely useful, they can also be misused. The guidelines below therefore aim to ensure that GenAI augments human reasoning without compromising scientific validity, reproducibility, or accountability. In all aspects of the present academic work, GenAI tools like ChatGPT are treated as assistive tools (comparable to statistical software, like R, or calculators), not as autonomous analysts or skilled authors.
5.3 Core principles of responsible use of GenAI
Accountability: The human researcher retains full responsibility, and is held accountable, for all analytical decisions, interpretations, text, code, and inference. The use of GenAI does not relieve any burden of authorship, responsibility, or liability. Importantly, this means that the human researcher is allowed, and in most cases encouraged, to use AI-produced code, as long as the researcher has vetted and error-checked it. This likewise assumes that the researcher accepts responsibility for any and all errors arising from AI assistance. In short, GenAI may accelerate work, but it may not replace understanding. If a researcher cannot defend a decision without the GenAI present, the decision is invalid.
Scientific Primacy: Scientific reasoning precedes and constrains GenAI use. Hypotheses, data-generating assumptions, and model structures must be defined by the researcher. GenAI may clarify or critique these decisions but may not create them from scratch without human justification.
Transparency: All GenAI use must be documented in a way that allows another scientist to understand how GenAI influenced the work. GenAI-assisted reasoning must be distinguishable from original analysis.
Reproducibility: All results must be reproducible without access to GenAI tools. Data, code, and associated documentation must be sufficient to reproduce results independently. GenAI may assist development but must not be a hidden dependency.
Proportionality: The level of documentation should be proportional to the influence of GenAI. Minor stylistic assistance requires minimal logging, while conceptual or analytical assistance requires more extensive, explicit documentation and justification.
5.4 What is allowed, and what is not allowed?
Permitted Uses of GenAI in Scientific Work
Conceptual Clarification: GenAI may be used to explain statistical concepts, examine modeling assumptions, and interpret model diagnostics (at a broad level).
Planning and Reflection: GenAI may assist with refining research questions, generating assumption checklists, stress-testing interpretations, and identifying alternative explanations that are then vetted for ecological plausibility by the human researcher.
Writing Support: GenAI may be used to improve clarity, organization, and tone, and to identify points of ambiguity or inferential overreach. GenAI must not invent methods, results, or citations.
Code Understanding: GenAI may explain what existing code does, diagnose warnings or errors conceptually, and suggest stylistic or reproducibility improvements. GenAI may not replace independent code comprehension.
Prohibited or Restricted Uses of GenAI in Scientific Work
Replacement of Scientific Judgment: GenAI must not be used to select models, error distributions, priors, or random-effects structures without independent human justification, nor to interpret results without verification.
Undocumented Analysis Generation: GenAI must not generate full end-to-end analytical pipelines without explanation, or produce black-box code whose logic is not understood by the researcher.
Fabrication: GenAI must not be used to invent data, methods, results, or citations, or to create post-hoc justifications unsupported by the analysis.
Outcome Optimization: GenAI must not be used to iteratively prompt for statistically significant results, improved AIC values, or exaggerated certainty.
5.5 GenAI Interaction Log
5.5.1 Purpose of the Log
Researchers should maintain a GenAI Interaction Log alongside their analysis notebook to document meaningful GenAI involvement in the research process. (This is a required component for students in ZOO/ECOL-5500 starting in Spring 2026.)
Required Information: Entries record the date, GenAI tool used, purpose of interaction, nature of assistance provided, key takeaway, and the verification step performed.
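A log entry covering the required fields above could be sketched as follows. This is a minimal illustration only: the field names mirror the "Required Information" list, but the file format (CSV here) and the example values are assumptions, not course requirements.

```python
# Hypothetical GenAI Interaction Log entry, one row per interaction.
# Field names follow the "Required Information" list; the CSV format
# and example values are illustrative assumptions.
import csv
import io

FIELDS = ["date", "tool", "purpose", "assistance", "takeaway", "verification"]

entry = {
    "date": "2026-01-06",
    "tool": "ChatGPT",
    "purpose": "Clarify interpretation of a random-effects variance estimate",
    "assistance": "Explained what a near-zero variance component can indicate",
    "takeaway": "Consider a simpler random-effects structure",
    "verification": "Checked software documentation and refit both models",
}

# Write the entry to an in-memory CSV; a real log would append to a file
# kept alongside the analysis notebook.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(entry)
csv_text = buf.getvalue()
print(csv_text)
```

Any structured, append-only format (a spreadsheet, a plain-text table, a notebook section) serves equally well, provided each entry records all six fields.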
When Logging Is Required: Logging is required when GenAI influences model choice, interpretation, methodological justification, or scientific claims. Minor stylistic or grammatical use does not require detailed logging.
Verification Obligations: Any GenAI-suggested content must be independently verified using primary literature, software documentation, diagnostic checks, or independent reasoning. Unverified AI output must not appear in final analyses.
Authorship and Attribution: GenAI tools are not authors and do not receive citation credit. If required by journals or funders, GenAI use may be acknowledged in a neutral Methods or Acknowledgments statement.
Ethical Safeguards: GenAI use must not obscure uncertainty, inflate confidence, reduce methodological transparency, or disadvantage collaborators or students with limited access to AI tools.