Creating a department policy on AI use
First steps
Guidelines from the university’s AI Task Force provide important considerations on appropriate use of generative AI. By design, they leave many decisions to faculty, departments, and schools. Each discipline has different goals and expectations, and faculty have widely divergent views on the ethics and use of generative AI tools. At a minimum, though, department leaders should clarify two things:
1. What type of policy they want to pursue. This often comes down to one of three approaches.
- Uniform policy. A uniform approach would provide consistency across courses and clarity for students. Reaching agreement could be difficult, though, and an overly strict policy could restrict instructor and student experimentation.
- Flexible guidelines. These could build on the AI Task Force guidelines and address additional elements related to teaching and learning. Or they could simply stipulate that each class must have an AI policy and that each instructor should discuss appropriate use of generative AI. This would provide less clarity for students but would make consensus among faculty easier.
- No policy. The department or school would point students, faculty, and staff to the guidelines from the AI Task Force.
Regardless of a department’s approach, every course should have clear guidelines on AI use, and instructors should have open and frequent conversations with students about the ethical use of generative AI tools.
2. How their curriculum is preparing students for an AI world. Regardless of faculty members’ views on generative AI, students need opportunities to learn how it works, how it can be used ethically and effectively, why it is fraught with ethical issues, and how it is affecting disciplines, careers, and society. That doesn’t mean every class must use AI, but students need ways to gain experience with AI and to make informed choices about its use. They also need to understand how their courses help them build the human skills that will allow them to thrive even as organizations integrate AI into jobs. (See Human skills to emphasize below.)
Getting started on guidelines
We suggest that departments look at AI guidelines as an opportunity to discuss and shape the future of their disciplines and their curricula. That includes how AI is changing jobs, what skills undergraduates and graduate students will need in the future, and how AI can provide new methods of inquiry and new approaches to working with information. They should also consider what isn’t changing and what skills or principles they want to maintain. Two CTE overviews of research into generative AI in education and AI trends can help with these discussions.
These conversations will no doubt be challenging, and the most fruitful approach would be to have a group of faculty members and students create a draft of guidelines for the faculty as a whole to consider. The voices of undergraduates and graduate students will be important in the conversations.
Here are some things for that group to address:
Agree on guiding principles. These should be shared values that guide the work of faculty, staff, and students. Guiding principles help set the tone of discussions and help resolve conflicts that may arise. (See Examples of guiding principles below.)
Draw on existing policy. Rather than creating something new, consider whether existing policies or guidelines can be adapted. This includes policies on privacy, ethics, and academic integrity.
- Include elements from university guidelines. Several elements from the AI Task Force guidelines are worth reinforcing, including those on assuming responsibility for the integrity and quality of individual work, avoiding use of generative AI to spread misinformation or disinformation, and disclosing use of generative AI when appropriate.
Consider the future. How is AI use affecting your discipline now, and what might the discipline look like in five or 10 years? How might AI reshape jobs and careers? How can departments help students prepare for that future? What disciplinary or ethical principles can help guide the discipline as it grapples with use of generative AI?
Consider the needs of students. Large percentages of students and graduates say they feel ill-prepared to handle the use of AI in their careers because their schools provide little or no guidance. How can departments help students learn about generative AI? How might it change what and how students learn? How will your courses prepare students to work with generative AI ethically and effectively? And how will students gain the crucial human skills that AI lacks? Students should be included in these discussions.
Remember the needs of graduate students. Generative AI has become an important part of how graduate students learn and work. It influences how they search for, analyze, and present information. It can also help them take on bigger problems and challenges and find solutions to them. How will your department integrate these skills into its curriculum?
Consider what inappropriate use of AI involves. AI helps run every computer, tablet, and smartphone. Generative AI is integrated into word processing programs, spreadsheets, and presentation tools. Spellcheck and grammar tools use AI; Grammarly, for instance, now offers generative AI capabilities similar to ChatGPT’s. Search engines have long used AI and now integrate generative AI into routine results. The point is that AI use is difficult to define and often difficult to separate from anything created with digital tools. Any policy must address those complexities in defining appropriate and inappropriate use of AI. (See Using AI ethically in writing assignments for additional considerations.)
Consider academic integrity. Any guidelines or policy should include approaches to handling misuse of AI. AI detectors are unreliable and do not provide proof of student misconduct. How then will instructors determine what has been generated by AI and what has been created by students? What is proof of AI use? How will instructors decide when and whether to pursue an academic misconduct case? Those are challenging issues, but discussing them upfront will save much time and anguish later on. (See Considerations on academic integrity below.)
Broader steps to take
Once a committee and department leaders have agreed on some of those foundational issues, they should work on some additional areas:
Integrate AI literacy. This is essential for helping students interact with generative AI ethically and effectively. There is no universal definition of AI literacy, but it generally includes background on what AI is and how generative AI works; guidance on using AI ethically and effectively; strategies for working through the many weaknesses and ethical issues of generative AI; and perspective on how AI is shaping the future of jobs and society. CTE has created Canvas modules that instructors can integrate into their own courses. The AI resources page on the CTE website also has helpful material.
Use AI for learning. Generative AI is being integrated into Canvas and many other digital tools, and it can help with student engagement and provide new ways to learn. How might departments take advantage of these tools in courses, class preparation, assessment, and other aspects of teaching and learning? How can they encourage instructors to experiment with assignments and other elements of their classes to help identify new approaches to learning with generative AI?
Emphasize human skills. As use of generative AI grows, human judgment and human skills become more important. These human skills (sometimes called soft skills) are often hidden, even though they are at the heart of nearly everything we do. How can departments make those skills more visible in the curriculum so that students understand their importance and application? (See Human skills to emphasize below.)
Work at building trust. One reason generative AI has become so problematic is that the bonds of trust between students and faculty have frayed. Many instructors assume that students will cheat, and many students don’t see the value in the work assigned in their classes. Building trust requires repairing that disconnect and helping students better understand the purpose and importance of assignments. It also requires creating a department culture in which students feel part of a community. That takes time and patience. Guidelines on AI use are worthless without trust, though.
Keep talking
Any AI-related guidelines or policy must be flexible enough to give faculty and students room to experiment but clear enough to establish some boundaries. That will be crucial as AI tools evolve in the coming years. The steps above can help guide discussions, but academic leaders, faculty members, and students must continually revisit guidelines and adapt them as necessary. Consider those discussions an opportunity to keep teaching and learning vibrant.
Examples of guiding principles
Departments should ground any policy or guidelines in shared principles. Here’s an example.
The suggestions in this document are grounded in KU’s values and strategic plan, and in the work of KU faculty members. These include a commitment to integrity, respect, innovation, stewardship, and excellence. They are aligned with KU’s mission to educate leaders and make important discoveries. They are also grounded in the idea of KU as an exceptional learning community that values belonging, and in principles articulated by KU faculty members who have done work in generative AI.
- Courses and curricula should be driven by pedagogical approaches that emphasize curiosity, discovery, and applied learning.
- Human interaction is a core component of a KU education.
- Inquiry and experimentation are core activities for faculty, staff, and students.
- The department must adapt proactively to changes in society, community, and technology to prepare students for careers and for lives as citizens, voters, and consumers.
- Faculty, staff, and students must have safe, effective, and transparent technological systems that protect privacy and prevent algorithmic discrimination.
- All members of the department community should follow common ethical principles in their work. These include academic integrity, honesty, a sense of fairness and justice, respect for others, responsibility for one’s own actions, and pursuit of a greater good.
- Faculty, staff, and students should collaborate on shaping meaningful and ethical use of generative AI in academic and professional work.
- “By constantly raising the bar and meeting new challenges, we will ensure that our students receive an education that goes beyond the standard.”
- We must be proactive in adopting meaningful change rather than reactive to constant technological disruptions.
Considerations on academic integrity
We can’t rely on detection to solve the challenges of generative AI. Detection tools are unreliable, and students can fool them with minimal effort. Research has consistently shown that AI detectors are far more likely to flag the work of students for whom English is not a first language. Even Turnitin urges caution with its own system. The tool will make mistakes, and “that means you’ll have to take our predictions, as you should with the output of any AI-powered feature from any company, with a big grain of salt,” David Adamson, an AI scientist at Turnitin, says in a video. “You, the instructor, have to make the final interpretation.”
Relatedly, a joint task force of the Modern Language Association and the Conference on College Composition and Communication has warned that AI detectors are likely to alienate students. Prominent use of these detectors, and misinterpretation of their results, puts students on the defensive and establishes a culture of contention rather than one of collaboration.
Many faculty members and graduate teaching assistants have made incorrect assumptions about the scores from AI detectors, which can be misleading at best. Even small percentages of false positives mean that dozens of students are falsely accused of academic misconduct each semester. (The Temple University teaching center does a good job of explaining the weaknesses in the tool, based on testing it did with a variety of texts.)
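To see the scale of the problem, consider a hypothetical calculation (the figures here are illustrative, not drawn from any vendor’s published rates): a detector with a 1 percent false-positive rate, run on 5,000 student submissions over a semester, would wrongly flag about

0.01 × 5,000 = 50

pieces of authentic student work before any human review.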
At the same time, we must recognize that generative AI has added time pressures, stress, and uncertainty to the lives of faculty members, who must judge the authenticity of student work. Completion of courses and degrees signals particular competencies that students have gained, but only if the work is done with integrity. Academic integrity is not the sole responsibility of individual faculty members. Students, who join the University community as learners, must help ensure academic integrity, as must everyone at the University. That is yet another reason that students need to be part of discussions about AI guidelines and use.
Generative AI has little value unless students gain strong foundational skills in their chosen disciplines. Those skills give students the means to critically evaluate the output of generative AI, not only through the lens of their disciplines but also through the lenses of ethical responsibility and credibility.
Human skills to emphasize
All of us who work with students must also maintain a human connection in courses and curricula and help students gain uniquely human skills (often called soft skills). Faculty generally recognize the importance of these skills but rarely include them among course goals. Human skills are likely to grow in value as organizations adopt and use AI tools, and each discipline should emphasize the importance of these skills and identify how and where in its curricula students can learn them. Here are some of the most prominent human skills; the MIT Jameel World Education Lab lists many others.
- Creativity and innovation: The ability to generate new ideas and solutions.
- Collaboration: The ability to work effectively with others.
- Communication: The ability to clearly convey information and ideas.
- Empathy: The ability to understand and care about the feelings of others.
- Compassion: The ability to show kindness and a willingness to help others.
- Critical thinking: The ability to analyze information objectively and make reasoned judgments.
- Problem-solving: The ability to find solutions to complex issues.
- Adaptability: The ability to adjust to new conditions.
- Digital literacy: The ability to evaluate and use information and technologies.
- Ethical decision-making: The ability to make decisions based on standards and principles.