Oct 29, 2025
A supervisor-focused playbook for doctoral AI policy, disclosure, privacy and assessment practice.
Written by: Alessandra Giugliano
Universities have adopted generative‑AI policies at breakneck speed, yet many supervisors still face uncertainty about how to apply these rules in doctoral programmes. Student use of AI tools is rising sharply—the Higher Education Policy Institute (HEPI) survey reports that 92% of students used at least one AI tool in 2025. Without clear, written expectations, students receive conflicting signals across courses and committees.
Supervisors must therefore craft comprehensive, documented guidelines covering disclosure, data privacy and assessment boundaries. This article, the third in thesify’s updated university AI policy series, distils common institutional rules into a practical playbook for professors and PhD supervisors. It builds on our university AI policy update report and our doctoral researcher guide to AI in 2025.
Why Supervisors Need Clear, Written AI Rules Now
Three pressures underscore the need for clarity:
Widespread usage – With AI adoption approaching ubiquity among students, uncertainty about acceptable uses breeds confusion. Without explicit guidance, some students may use AI for tasks that examiners consider misconduct.
Uneven enforcement – Policies vary widely by department. Columbia University’s provost states that AI use in assignments or exams is prohibited unless instructors explicitly permit it. UCLA’s Teaching & Learning Center offers sample policies ranging from outright prohibition to limited use with citation. Such divergence leaves students guessing.
Privacy and data risk – Harvard IT warns users not to input confidential or regulated information into generative AI tools, noting that outputs may be inaccurate and that users are responsible for published content. MSU’s guidelines and Leeds’s supervisor page echo this caution, advising the use of institution‑approved tools and recorded approvals.
Detection technologies cannot substitute for judgement. Turnitin emphasises that its AI writing indicator is a single data point; educators should not treat the percentage as a verdict. Instead, suspicion should prompt conversation and evidence gathering.
For more information about AI detection, check out our guide on how professors detect AI in academic writing.
The Policy Baseline: Disclosure, Privacy, and Assessment Scope
Institutions around the world converge on three baseline requirements: disclosure, data privacy and assessment boundaries.
Disclosure and Approval Conventions for Doctoral Work
PhD supervisors should set explicit rules and require written disclosure where use is allowed.
Columbia’s policy states that, unless a course instructor clearly grants permission, using generative AI for assignments or exams is prohibited and any use must be disclosed.
UCLA encourages instructors to draft syllabus statements specifying whether AI can be used and how to cite it.
Leeds’s supervisor guidance instructs supervisors to inform postgraduate researchers about university policies, discuss which tasks fall into “green,” “amber” or “red” categories, and record agreed AI use in supervision meeting notes.
Princeton goes further in some units, requiring a generative-AI statement on the thesis cover page and a record of prompts and outputs in appendices, which gives supervisors a concrete template to request and review.
Stanford and MIT provide instructor resources for drafting course AI policies, which supervisors can mirror in supervision agreements and milestone forms.
PhD supervisors should therefore:
Draft a course or supervision‑agreement clause specifying what AI use is permitted, requiring students to disclose and obtain written approval for any “amber” tasks.
Explain that absence of a statement means AI is prohibited, as per Columbia’s rule.
Include a statement of responsibilities: candidates must verify AI outputs, cite any significant AI contributions, and avoid using AI to generate final analytic or discursive content.
Data Privacy and Research Materials
Harvard IT cautions against entering confidential or regulated data into publicly available generative AI tools. Institutions also point staff and students to approved or supported services, for example campus-licensed tools and environments. Columbia’s policy references a vetted list of approved applications, while UC Berkeley has communicated supported options for campus use.
To safeguard data:
Require that sensitive or proprietary research data never be uploaded to external AI services.
Encourage use of institution‑approved or “sandboxed” AI tools for drafting and summarising, and ensure any AI use is recorded in meeting notes.
Instruct students to verify all AI‑generated text against primary sources and edit or rewrite as needed.
Summative Assessment and Viva Boundaries
Oxford’s 2025 policy on AI use in summative assessment requires those setting assessments to declare whether, and how, AI can be used for each task, to specify student declarations where use is permitted, and to identify suspected unauthorised use through normal marking or university-endorsed tools.
At the time of the last update, Oxford noted that no detection tools had university endorsement. These rules give supervisors clear parameters for dissertations, orals, and other summative components.
Doctoral assessment should make clear where AI use stops. Most top-tier institutions permit learning uses such as brainstorming or language clarity when disclosed, yet restrict AI for summative tasks that represent a candidate’s original analysis or claims. Supervisors can avoid ambiguity by naming prohibited use cases and showing what they look like in practice.
What counts as prohibited “AI drafting” in summative work
Generating or rewriting analytical paragraphs that make, expand, or restructure the argument in the analysis or discussion sections.
Producing summary or synthesis that substitutes for the candidate’s own reasoning in chapters that will be examined.
Using AI during a viva or any closed-book examination setting.
For an example of prohibited use, review the screenshot below, which illustrates how the tool JenniAI proposes multi-sentence analytical rewrites that change meaning, structure, and emphasis.

JenniAI’s automated rewrite that changes structure and emphasis, which should be prohibited in summative chapters.
In supervision meetings, you can use examples like the above to explain why such output must not be used in summative chapters, and why any permitted support must be disclosed precisely.
As a PhD supervisor, you should also show candidates what an extensive, paragraph-level rewrite looks like so that “language help” is not confused with argumentative drafting. If AI is allowed for language clarity on a draft, the candidate must verify all wording, keep ownership of the claims, and disclose the use in the thesis front matter or methods.

Extended paragraph-level rewriting, which should not be used in viva contexts or summative sections of the thesis
The example above illustrates how the tool JenniAI displays long AI-generated paragraphs that rephrase and restructure candidate writing. This is another example of AI drafting that should be disallowed in viva settings and in analysis, discussion, or conclusions.
PhD supervisor tip
Add a one-line reminder to milestone forms: “No AI-generated text is permitted in analysis, discussion, or conclusions. If limited AI support is allowed for language clarity, disclose it and keep version history.”
Build Your Supervisor Playbook (Templates)
Use the green–amber–red categorisation to align expectations. Here is a suggested table:

A supervisor-focused map of permitted, conditional, and prohibited AI uses in doctoral work, plus disclosure and privacy reminders.
Supervision Agreement Clause (Green / Amber / Red Tasks)
Use a simple category system aligned with institutional guidance. Oxford recommends category-level declarations by assignment and explicit student declarations where use is permitted, which you can adapt into a supervision agreement covering doctoral milestones.
For sample syllabus language to pattern your categories and wording, draw on instructor resources from UCLA and Stanford.
Green (permitted with acknowledgement): brainstorming, outlining, grammar or style suggestions on drafts, code scaffolds for non-assessed prototypes.
Amber (allowed only with prior written approval and disclosure): paragraph-level drafting in literature review, captions, non-sensitive data cleaning scripts, language polishing before submission.
Red (not allowed): drafting core analysis or discussion, writing the abstract or conclusions, any use during viva or other summative assessments, entering confidential or regulated data into public tools.
Thesis Disclosure Statements Supervisors Can Request
Princeton’s practice offers a model. Request a brief front-matter statement acknowledging permitted AI use and a methods or appendix note that lists tasks, tools, and verification steps. Where appropriate, ask candidates to keep a log of prompts and tool outputs.
If you allow limited, formative AI support, require a brief disclosure so examiners know what was done and where. Two common, permitted examples are language clarity checks and structured feedback on evidence coverage. The goal is transparency, version history, and candidate ownership of claims.
Use Case A: Language Clarity (permitted with disclosure):
If you allow readability checks on a draft, the PhD candidate can use a tool like thesify to view a reading score, then revise in their own words. All edits remain the student’s writing, and no AI-generated paragraphs are pasted into analysis or discussion.

Readability checks are a permitted, formative use when the student rewrites in their own words.
Use Case B: Evidence Coverage Review (permitted with disclosure):
If you allow formative feedback on how well a section supports a stated claim, the student can run an evidence check, note any gaps, and then rewrite manually. The AI output remains advisory. The student verifies citations and writes the revised text independently.
The screenshot below shows a safe, formative use of thesify’s feedback that flags unsupported claims and points the writer to add evidence. This is diagnostic feedback, not argumentative drafting, and is acceptable when disclosed.

Formative evidence coverage helps students identify gaps, then rewrite independently.
Short form (Acknowledgements or Preface)
“I used thesify for readability checks and formative evidence coverage feedback on draft sections. I reviewed all suggestions and rewrote the text myself. I did not upload confidential or regulated data. Any edits reflect my own words and reasoning.”
Long form (Methods or Ethics)
“With supervisor approval dated [DD Month YYYY], I used thesify for two formative tasks: (1) readability checks on draft paragraphs, and (2) advisory feedback on the alignment between my claims and cited evidence in [chapters/sections]. Tool outputs informed my own revisions. I independently verified all citations and did not paste AI-generated paragraphs into analysis, discussion, or conclusions. No confidential or regulated data were uploaded.”
Supervisor tip
Add a line to your milestone form: “AI may be used for formative feedback on clarity and evidence coverage only, with disclosure and version history.”
For more disclosure examples, read our post PhD AI Policies 2025. For information on academic journal policies, check out our guide AI Policies in Academic Publishing 2025.

Detection With Care
Do not act on an AI score alone. Turnitin advises against using the AI writing score as a sole basis for decisions. Oxford adds that suspected unauthorised use should be identified through marking or through tools formally endorsed by the university, noting that none were endorsed at the time of its last update. Document concerns, ask for drafts and version history, and proceed under standard academic-misconduct processes.
Assessment Redesign and Feedback With AI
Redesigning tasks can reduce the risk of AI misuse and promote deeper learning:
Task Design and Integrity
Use clear AI-use policy language in syllabi and supervision documents, then design tasks that require original synthesis and visible process. Our teaching with AI in higher education guide recommends prompts that require personal reflection, data analysis or synthesis of multiple sources.
UCLA also provides sample policy statements instructors can adapt. Stanford’s teaching resources include templates and workshop kits to help you set and communicate policy options, which supervisors can mirror in assignment briefs and milestone rubrics.
If you allow formative support, frame it as process evidence rather than a substitute for writing. Require students to submit a brief reflection on how they used feedback to revise, along with tracked-changes or version history. Summaries can help with reading comprehension, and structured prompts can surface gaps in claims or evidence. Students still own the words and the argument.
For example, the screenshot below shows AI use for evidence-aware revision. thesify’s evidence panel suggests checking whether a draft paragraph actually supports its stated claim. Students still add the sources and rewrite in their own words.

Advisory prompts help locate gaps without writing the paragraph for the student.
Next, show how actionable recommendations can drive targeted edits. For example, ask candidates to attach one to three items from their thesify feedback report and explain what they changed, where, and why.

Numbered recommendations support a revision plan that the student executes in their own words.
Require students to keep copies of feedback reports in their version history and to note any changes made as a result.
Case Study Ideas for Doctoral Milestones
Use practice vivas or methodology‑chapter workshops where students must explain their research decisions without AI assistance. Encourage PhD students to summarise AI support in their ethics or methods section.
For example, you can adapt Oxford’s summative-assessment structure to doctoral milestones: for each viva or summative submission, specify permitted or forbidden AI assistance and the form of student declaration, then collect process artefacts such as outlines and reading notes.
If you run a pre-viva rehearsal or a methods workshop, consider allowing AI-generated summaries of published papers, limited to non-sensitive excerpts, to support reading comprehension. The summary should serve as a scaffold that helps the student check scope, keywords, and main claims before a live discussion. The viva still assesses the student’s own analysis and reasoning.

Summaries can support reading comprehension. The viva and summative writing still assess the student’s own analysis.
Feedback Loops with AI
When AI tools are permitted, ask students to document prompts and outputs. As a supervisor, you can then review these logs to understand how AI contributed and provide targeted feedback. This fosters transparency and reduces fear of hidden AI use.
Regional Context Supervisors Should Watch
EU AI Act Signals for Universities and Educators
The EU AI Act, adopted in 2024 with its first provisions taking effect in February 2025, classifies AI systems used in education as high-risk. It imposes mandatory transparency, accountability and fairness requirements on such systems and bans certain applications outright, such as inferring emotions in educational settings.
Harvard’s HUIT advisory lists the uses prohibited in the EU. These include any AI system that:
Uses subliminal techniques to distort behaviour
Exploits vulnerable groups
Infers emotions in educational settings
Scrapes images for facial‑recognition databases
Categorises individuals by biometric data
Performs real‑time biometric identification for law enforcement
Performs social scoring
Performs predictive policing
While these restrictions apply directly to AI system developers and deployers, supervisors should be aware that research collaborations involving European partners or data subjects may fall under these rules. It is therefore prudent to:
Avoid using AI tools that infer emotions or collect biometric data from students.
Ensure research proposals involving AI meet the Act’s transparency and fairness requirements, including documentation of data sources and model logic.
Offer AI literacy training to research teams to satisfy the Act’s requirement for staff competence.
Next Steps for Professors and PhD Supervisors on AI Policy
Supervisors set the tone for responsible AI use in doctoral research. Clear AI-use policies, written approvals and transparent disclosure protect students and institutions alike. By prohibiting AI in summative assessments, preserving data privacy and embracing the green–amber–red framework, supervisors can offer consistent guidance across programmes. Evidence‑based detection practices and thoughtful task design create a culture of trust rather than surveillance.
As regulations such as the EU AI Act reshape the educational landscape, staying informed and adapting policies will be essential. For deeper dives into institutional rules or student perspectives, explore the thesify blog and join thesify’s Substack for continuous updates.
Try thesify in your supervision workflow
Curious how a candidate’s chapter reads to an examiner? Run a safe excerpt through thesify to get formative feedback on clarity, structure, evidence coverage, and potential citation gaps. Use a copy of the draft, avoid uploading confidential or regulated data, and bring the notes to your next supervision meeting.
Related Posts
PhD AI Policies 2025: A Doctoral Researcher’s Guide: This guide distills thesify’s October 2025 update on generative AI policies and incorporates guidance from universities and graduate schools around the world. It is tailored for PhD students, emphasising thesis writing, supervision and research integrity. You’ll learn why AI policies matter for PhD students, what changed in 2025, the rules you must follow, how to disclose AI use and how to safeguard your data. By the end, you’ll have the confidence to harness AI tools responsibly without compromising your degree.
University AI Policies 2025: New Rules and Rankings: Unacknowledged AI‑generated content can constitute plagiarism. Entering sensitive or proprietary data into public AI tools may violate privacy regulations. Over‑reliance on AI can undermine students’ ability to think critically and articulate ideas. Understanding these risks explains why most universities treat AI use like assistance from another person and emphasise transparency. This October 2025 update compares generative AI policies at top universities, highlighting new rules, disclosure requirements and responsible use guidance.
Teaching with AI in Higher Education (2025) – Insights for Professors | thesify: By designing curricula that intentionally engage with AI tools, professors prepare students for a future where fluency in AI literacy is vital. This guide explores what the latest HEPI Student AI Survey means specifically for professors. You’ll discover best practices for integrating AI into your curriculum, redesigning assessments to deter misuse, and bolstering your own AI literacy—ensuring you're prepared to meet the challenges and opportunities presented by this transformative technology.