Jul 24, 2025
Written by: Alessandra Giugliano
In just one year, teaching with AI in higher education has become essential: generative AI usage among UK university students surged from 66% to 92%, according to the 2025 HEPI Student AI Survey. With 88% of students now employing AI tools in academic work, higher education faces an unprecedented shift.
For professors, these changes prompt critical questions: How should teaching practices adapt to pervasive AI use? What steps must faculty take to maintain academic integrity without stifling innovation? And how can universities close the widening digital divide revealed by differing student experiences with AI?
This guide explores what the latest HEPI survey means specifically for professors. You’ll discover best practices for integrating AI into your curriculum, redesigning assessments to deter misuse, and bolstering your own AI literacy—ensuring you're prepared to meet the challenges and opportunities presented by this transformative technology.
Faculty AI Literacy: Bridging the Knowledge Gap
Despite rapid student adoption of generative AI, faculty preparedness to integrate these tools into teaching has lagged. While 42% of students now perceive faculty as "well-equipped" to support their use of AI—up from just 18% in 2024—the reality is that many professors remain cautious or uncertain about effectively utilizing AI tools in their courses.
The HEPI 2025 Student AI Survey clearly indicates that students actively seek guidance from professors who understand and confidently use AI. However, many faculty still make only minimal use of these technologies, largely due to a lack of targeted training or clear institutional support. That’s why faculty AI literacy must become a strategic priority.
If you're wondering where to begin, one practical starting point is understanding how to recognize and manage AI-generated content in student work. Our article How Professors Detect AI in Academic Writing: A Student’s Guide offers insight into common detection tools, their limitations, and how to approach AI use constructively in the classroom.
To improve faculty training for AI in higher education and close this literacy gap, consider:
Attending faculty-led or institution-sponsored AI workshops
Forming collaborative learning groups with colleagues
Experimenting with AI tools for generating lesson plans, rubrics, or formative feedback
Publishing or presenting case studies from your teaching practice at university teaching and learning conferences, contributing directly to a growing body of academic knowledge on AI in education
To explore practical approaches, read our detailed guide AI Academic Writing Ethics–How Professors Can Teach Responsible AI Use.
Ultimately, becoming proficient in generative AI is now a necessity. As more students adopt AI tools to explain concepts, summarize readings, and draft assignments, your ability to guide them responsibly and confidently is essential. Developing these skills now ensures you're prepared to lead, not just react to, the future of AI-integrated teaching.
Adapting Curriculum and Instruction for the AI Era
Generative AI tools like ChatGPT have become commonplace in classrooms, so traditional teaching methods and curricula demand thoughtful reconsideration. Rather than attempting to restrict student interaction with these powerful technologies, educators can enhance learning outcomes by intentionally integrating AI into course design. By shifting the emphasis from restriction to integration, you can transform AI from a disruptive force into a valuable educational partner.
Incorporating AI Tools as Learning Aids
Professors can begin by explicitly allowing—and even requiring—the responsible use of AI in preliminary research, idea-generation phases, and initial drafts. For instance, students might use generative AI to create summaries of complex readings or propose preliminary research hypotheses. Class time can then be dedicated to more sophisticated, higher-order activities such as critically analyzing AI-generated insights, refining arguments, or debating the strengths and weaknesses of algorithmically produced content.
A practical classroom activity might involve students generating initial essays or reports using an AI writing tool, then actively critiquing and revising the AI's output to strengthen analytical skills. Assignments structured this way encourage students to see AI not as a shortcut but as a starting point for deeper intellectual engagement.
Personalizing Learning with AI
One of the most exciting opportunities for AI in higher education is its ability to personalize learning experiences. With AI-driven analytics, you can better understand where individual students are struggling and deliver tailored resources or support. For instance, adaptive tools can highlight comprehension gaps or writing weaknesses, enabling you to provide timely, targeted interventions that improve student outcomes.
This level of personalization not only increases engagement but also promotes equity in your classroom—ensuring all students, regardless of background or confidence with AI, benefit from its capabilities.
To design curricula that support personalized, ethical AI use, consider building assignments that require students to use AI tools reflectively and transparently. For practical ideas, our article 9 Tips for Using AI for Academic Writing (Without Cheating) offers faculty-focused guidance on incorporating AI into coursework in ways that maintain academic integrity.
Rather than avoiding AI altogether, you can intentionally structure activities that encourage students to think critically about AI-generated outputs, compare them with human reasoning, and refine their work accordingly. By embedding AI curriculum integration thoughtfully, you're building digital literacy and preparing students for the evolving demands of academic and professional life.
Emphasizing Critical Thinking over Memorization
Given that generative AI tools like ChatGPT, Google Gemini, and others are now widespread, it’s no longer sufficient for university curricula to focus on memorization-based assessments. Students can now access instant factual answers through AI platforms, making traditional rote-recall exams increasingly ineffective and outdated.
Instead, you should consider how to design AI-resistant assignments that emphasize higher-order thinking. This includes open-book exams, real-world case studies, and scenario-based questions that require students to analyze, interpret, and synthesize complex ideas. These types of assessments not only foster deeper engagement but also align with what AI cannot do—apply human judgment, reason through ambiguity, and generate original, critical insights.
An expert panel at a recent Ohio University symposium echoed this need for transformation, urging faculty to intentionally shift learning objectives away from information regurgitation and toward critical thinking, creativity, and academic independence. By focusing on the skills that remain uniquely human, you prepare students to excel in a world where AI is a given—not a threat.
If you're unsure how to start, we recommend reviewing our comparison of common AI writing tools in Jenni AI vs. Google Gemini: Comparing AI Tools for Academic Writing and Avoiding Cheating. The article highlights how students interact with these tools and offers insights into which features support ethical AI use—and which ones may require closer scrutiny. It's a helpful resource when deciding how to incorporate or limit specific platforms in your course design.
By revising your curriculum with these goals in mind, you can confidently adapt college teaching for AI while reinforcing academic integrity and intellectual development in your classroom.
AI in Student Assessments
The rapid adoption of generative AI among UK university students, highlighted by the 2025 HEPI Student AI Survey, is reshaping traditional assessment methods. With 40% of surveyed UK students agreeing that AI-generated content could achieve a good grade in their subject, faculty are increasingly concerned about maintaining academic rigor and integrity. Rather than engaging in a reactive "AI-detection arms race," professors are advised to proactively redesign assessment strategies.
Designing Assignments to Prevent AI Cheating
In-Class or Oral Assessments
One immediate solution is to incorporate in-class or oral evaluations. Such assessments clearly reflect students' own knowledge and skills without AI assistance. While entirely preventing AI use outside controlled environments is challenging, professors can regain confidence in grading accuracy through live presentations, oral exams, or supervised written tests.
Applied Projects and Portfolios
Another robust alternative is the shift towards applied projects and ongoing portfolios. These approaches emphasize authentic, incremental learning and are difficult for AI to replicate convincingly. For instance, professors might require reflective journals, hands-on lab experiments, or iterative project reports with periodic checkpoints. Such assessments offer clear insights into students' genuine abilities and understanding over time.
AI-Allowed Assignments
Educators worldwide are exploring alternative assessments driven by emerging technological, societal, and pedagogical trends, with generative AI now a major catalyst in rendering traditional exams outdated. Yet professors needn’t compromise educational rigor: proactively redesigned assessments can preserve both integrity and pedagogical excellence.
A particularly forward-thinking approach is to permit students to use AI tools in specific assignments—but grade them on how they engage with, document, and critically reflect upon their AI usage.
For example, you might ask students to use an AI writing assistant to help develop a draft paragraph or strengthen an argument, and then submit a reflection explaining:
What the AI suggested
Which suggestions they accepted or rejected, and why
How the AI-supported version improved (or didn’t) their thinking
This kind of task encourages students to actively engage with AI as a learning partner, not a shortcut. It also reinforces skills like source evaluation, citation awareness, and analytical reasoning—skills AI can’t develop for them.
To support assignments like these, professors can integrate AI tools designed specifically for academic integrity. For instance, thesify’s AI Writing Assistant provides feedback on evidence use and interpretation rather than writing full paragraphs, making it ideal for helping students improve their critical thinking without doing the work for them.
[Image: thesify’s AI Writing Assistant flagging a claim in a student draft that lacks citation]
In the example above, thesify highlights areas where a student’s claim lacks citation and prompts them to provide stronger evidence and analysis. An assignment might ask students to run their draft through thesify, revise based on its feedback, and then explain how they strengthened their paragraph as a result. This approach gives you visibility into their learning process and ensures transparency.
Assignments that encourage students to critique AI outputs, reflect on revisions, and apply academic standards reduce the risk of misuse while giving students hands-on experience with tools they’ll likely encounter in future academic or professional settings.
Maintaining Academic Integrity without an AI Witch Hunt
In navigating the proliferation of AI tools, professors understandably fear increased academic misconduct. The 2025 HEPI survey underscores a key insight: UK students' primary hesitation around AI use is fear of being accused of cheating. Rather than adopting punitive stances, HEPI strongly advocates nuanced institutional AI policies that balance trust with sensible verification.
Transparent Communication of AI Policies
Clear, consistent communication is essential when setting expectations around AI use. According to the 2025 HEPI survey, students often report confusion about what’s allowed—especially when policies vary between courses. To foster a transparent learning environment, consider outlining acceptable AI practices in your syllabus and discussing them in class. For example, clarify whether students can use AI for brainstorming, outlining, editing, or generating content—and if so, under what conditions and with what attribution.
One simple way to promote openness is to require students to submit their AI usage history or documentation alongside assignments. Tools like thesify’s AI Writing Assistant make this process easy: students receive downloadable reports summarizing their revision process, source usage, and types of feedback received.
[Image: thesify’s downloadable report summarizing a student’s revision process, source usage, and feedback received]
The example above comes from thesify’s downloadable report feature, which students can attach to their final submission. The report shows how their writing evolved, where evidence was added or improved, and whether key academic standards were met. This kind of transparency builds trust while giving you a clear picture of how AI supported, not replaced, the student’s thinking.
Making reflection and documentation part of the assignment itself helps students engage more ethically with AI and reinforces your course’s integrity standards. It also gives them valuable practice in explaining their process, an important academic skill in its own right.
Limitations of AI-Detection Tools
While AI-detection software, such as Turnitin’s AI detection capabilities, has grown in prevalence, reliance solely on these tools is problematic. Students are acutely aware of detection tools' limitations, especially their susceptibility to false positives, as documented in thesify's exploration of how professors detect AI-generated content. Faculty should use detection technology judiciously, viewing it as complementary rather than decisive evidence in academic integrity matters.
Cultivating a Culture of Ethical AI Use
The optimal response combines clear policy with proactive educational practices. Professors can significantly mitigate misconduct by fostering environments that normalize transparent and ethical AI usage.
Explicitly encouraging honest disclosure, offering practical training on responsible use, and openly discussing ethical standards around AI can effectively deter misuse. For further guidance on clearly communicating these standards, see our syllabus-focused resource, From AI Panic to Proactive Policy.
By embracing transparency and education over surveillance and penalties, universities foster trust, reduce student anxiety around AI use, and ultimately uphold academic integrity without hindering educational innovation.
Policy Development and Guidance: Beyond Bans towards Balanced AI Use
According to the 2025 HEPI Student AI Survey, institutional AI policies in UK universities are becoming clearer, with 80% of students confirming their institution now has explicit guidelines on generative AI usage. Despite this positive development, HEPI highlights an ongoing ambiguity in how these policies are communicated—less than a third of students reported that their university actively encourages AI use, while a similar proportion said it's discouraged or outright banned. This suggests significant room for improvement in terms of policy clarity and consistency.
Clarity and Consistency in AI Policies
Faculty play a critical role in clarifying and reinforcing university AI policy. Professors should ensure syllabus statements explicitly align with official university guidelines. Clearly indicate when AI use is actively encouraged, permitted with proper attribution, or strictly prohibited.
Ambiguity fuels confusion, inadvertently increasing instances of misuse or academic misconduct. Faculty-driven clarity can help address the uncertainty reported by UK students who feel they receive mixed messages from their institutions, as highlighted by HEPI.
Policy with a Purpose: Moving Beyond Punitive Approaches
HEPI strongly advocates that universities should abandon purely punitive AI approaches in favor of nuanced policies that recognize the inevitability and potential benefits of AI. Policies reflecting this balanced approach acknowledge that generative AI can enhance learning when employed ethically and transparently.
For instance, policies can explicitly permit AI in preliminary research stages or idea generation but clearly prohibit submitting unacknowledged AI-generated content as one's own. Faculty can champion these balanced approaches within academic committees, contributing directly to policy refinement that addresses both innovation and integrity.
Looking for AI policy examples for university professors? Check out our post: Generative AI Policies at the World's Top Universities
Faculty as Communicators and Leaders in AI Policy
Professors should proactively communicate AI policy, ideally beginning on the first day of class and reinforcing it regularly throughout the term. Openly discussing AI ethics, showcasing appropriate use cases, and transparently outlining acceptable boundaries significantly reduce confusion among students. The 2025 HEPI findings emphasize that students desire more support in managing AI ethically; professors who model transparent and responsible AI practices provide invaluable guidance, ensuring alignment between student and faculty expectations.
Faculty engagement also aids in closing the digital divide in education, another key concern underscored by the 2025 HEPI report. Clear, inclusive communication of AI expectations and practical demonstrations ensure students from varied socio-economic backgrounds feel equally equipped to use these transformative tools.
HEPI’s 2025 policy recommendations further suggest universities invest actively in training educators: currently, just 42% of UK students feel their lecturers are well-equipped to handle AI. By embracing faculty AI literacy training, professors can effectively lead AI policy implementation and confidently guide students, fostering a collaborative, informed educational environment.
Closing the AI Digital Divide: Ensuring Equity and Access
One of the most concerning insights from the 2025 HEPI Student AI Survey is the clear and widening digital divide among UK students. The survey revealed that students who are male, come from wealthier backgrounds, or study STEM disciplines consistently demonstrate greater confidence and frequency in using AI tools. Conversely, female students, students from lower-income backgrounds, and those in humanities subjects show notably lower levels of comfort and experience with AI.
Without intentional interventions, this disparity threatens to deepen existing educational inequalities. As a professor, you have a critical role in proactively addressing these gaps, ensuring all students can equally leverage AI benefits.
Inclusive Training Opportunities
Given that the 2025 HEPI survey reported only 36% of students have received institutional support for AI skills, you can take immediate steps to bridge this gap by offering optional, inclusive training workshops. Such sessions could cover essential skills like ethical usage of AI writing tools, proper citation practices involving AI-generated content, and interpreting AI outputs critically.
For instance, you might host a session specifically focused on “Integrating AI tools into university curriculum”, teaching students how to ethically use and critically evaluate outputs from popular tools like ChatGPT or thesify. This approach is particularly valuable for students who are unfamiliar or uncomfortable with AI technologies, providing them with a supportive learning environment to build their skills and confidence.
Integrating AI Skills into Coursework
Another powerful method to close the digital divide is to make AI competencies explicit learning outcomes within your course structure. By embedding AI skills directly into your assignments, you ensure every student receives practical, hands-on experience.
You might consider creating a small, introductory assignment where students are required to use a specific AI tool to perform preliminary research or draft outlines. Afterwards, students should analyze the tool’s strengths and limitations—an exercise that helps normalize responsible AI use for all learners. For guidance on implementing this in your classes, see our dedicated resource on How to Adapt College Teaching for AI.
Providing or Recommending Accessible Tools
The 2025 HEPI findings highlighted a significant mismatch between student expectations and institutional actions: 53% of students believe universities should provide AI tools, yet only 26% confirm this actually happens. Addressing cost barriers is crucial to ensuring equitable access to generative AI technologies. You can advocate within your university for campus-wide licenses for academically appropriate AI software.
A practical step you can take is recommending or requesting institutional access to academic-focused platforms such as thesify’s writing assistant. These vetted, reliable tools provide a structured way for students to engage ethically with AI. For a comparative analysis demonstrating the benefits of using specialized academic software versus generic tools, read our detailed breakdown on thesify vs. ChatGPT.
By actively addressing resource constraints through institution-supported tools, you play a significant part in preventing socio-economic status from dictating who benefits from AI-driven education.
In summary, ensuring equity in AI access is not merely advisable—it is an essential responsibility of higher education teaching in 2025. By proactively providing inclusive training, deliberately embedding AI literacy into your curriculum, and advocating for institutionally provided tools, you help ensure no student is left behind as generative AI continues to reshape higher education.
Conclusion: Embracing AI to Enhance Teaching (Not Replace It)
As the 2025 HEPI Student AI Survey and subsequent analysis make clear, teaching with AI is now a central and inevitable element of UK higher education. While the challenges, such as academic integrity, curriculum adaptation, and closing the digital divide in education, are significant, so too are the opportunities for transformation.
AI as an Opportunity for Reinvention
At a recent Ohio University symposium, educational experts urged professors to view the widespread adoption of generative AI not as a threat, but as a valuable opportunity for reinvention. This moment allows you, as educators, to reimagine and enhance traditional teaching practices. By proactively improving your own faculty AI literacy, thoughtfully integrating tools like ChatGPT into your curriculum, and creatively redesigning student assessments, you can ensure that education remains human-centered, relevant, and highly effective.
A Call to Action for Professors
Ultimately, AI’s purpose in education is not to replace professors but to amplify and enrich your teaching and mentorship capabilities. Quality higher education will always be centered around meaningful human interaction, critical thinking, and creative problem-solving—areas where professors uniquely excel.
Here’s how you can begin today:
Update just one assignment to explicitly include critical analysis of AI outputs.
Attend or organize a single AI workshop to improve your professional comfort with these technologies.
Have a conversation with colleagues about AI in teaching, sharing your experiences and exploring new ideas collaboratively.
Update your syllabus or course outlines to clearly communicate approved and effective uses of AI to students. Check out our guide for tips on updating your syllabus to better support responsible AI use.
By taking these modest yet meaningful steps, you are directly shaping the future of higher education. The integration of AI technologies, responsibly guided by dedicated educators like you, promises not only continuity but significant improvement—enabling richer student experiences, more effective instruction, and deeper intellectual growth. The future of higher education, shaped by AI, rests confidently in your hands.
Ready to Lead the Way in Ethical AI Integration?
Try thesify’s AI Writing Assistant—the academic-focused tool designed to help your students use AI transparently, cite responsibly, and improve writing skills authentically.
Related Posts
Insights from 2025 Student AI Survey on Generative AI Tools: Discover key student insights from HEPI’s 2025 AI usage survey on generative AI tools. Learn how university students use AI, navigate ethical concerns, and build digital literacy.
From AI Panic to Proactive Policy: A syllabus roadmap to help professors craft academic writing AI tool guidelines for ethical use and academic integrity. Understand why a syllabus policy on responsible AI academic tools is necessary to maintain trust and AI academic integrity in your classroom.
AI Academic Writing Ethics–How Professors Can Teach Responsible AI Use: By following best practices and sidestepping common errors, professors can create a learning environment that upholds integrity and equips students with valuable digital literacy skills. Remember, your goal is to produce graduates who can navigate a world full of AI ethically and effectively.