Adapting your course to artificial intelligence



The increasing use of applications that draw on generative artificial intelligence has led to intense discussions about the technology's growing capabilities and how it might change jobs, disciplines, societies, and even the way we think. Artificial intelligence has been evolving for decades, but most educators have done little to consider how these new digital tools might change teaching, learning, and student expectations. For instance:

  • What do we need to know about generative AI platforms and how our students might use them?
  • How might AI change how we teach, what we teach, what we assign students to do, and how we assess student work?
  • What ethical issues do we need to address as we consider how AI might change the way we and our students think and act?
  • How do we evaluate potential new tools and learning platforms when the algorithmic assumptions and decisions behind AI software are hidden?

There are no definitive answers to those questions, but the resources we have collected about generative AI are intended to provide guidance and ideas on how to approach these tools in your courses.

Some things to think about with AI

Talk with students about your expectations and how you will view (and grade) assignments generated solely with artificial intelligence. Emphasize the importance of learning and explain why you are having them complete the assignments you use. Why is your class structured as it is? How will they use the skills they gain?

That sort of transparency has always been important, but it is even more so now. Students intent on cheating will always cheat. Some draw from archives at Greek houses, buy papers online, or have friends do the work for them. ChatGPT and similar tools can provide just another means of avoiding the work that learning requires. Helping students understand how a course helps their learning will win over some of them, as will flexibility and choices in assignments. This is also a good time to emphasize the importance of human interaction in learning, something artificial intelligence lacks.

As fluent as ChatGPT often seems, its answers rarely delve beneath the surface of a topic. It makes mistakes. It makes things up. Its responses provide no clues about how it is programmed or why it provides the answers it does. A Princeton researcher called it a "bullshit generator" because it creates plausible arguments without regard for truth.

All of that makes it a valuable teaching tool, though. By having students probe for answers, we can help them improve their skepticism, challenge assumptions, and question information. By having them fact-check, we can help them understand the dangers of fluid writing that lacks substance or that relies on fallacies. By having them use ChatGPT or other AI tools for early drafts, we can push them to ask questions about information, structure, and sources. By having them apply different perspectives to ChatGPT's results, we can help broaden their understanding of points of view and argument.

Many students are already using ChatGPT in their school work. We can no more ban students from using artificial intelligence than we can ban them from using phones or calculators or laptops. We need to talk with students about how to use ChatGPT and other AI tools effectively and ethically, though. No, they should not take AI-written materials and turn them in for assignments, but yes, they should use AI when appropriate. Businesses of all sorts are already adapting to AI, and students will need to know how to use it when they move into the workforce. Students in K-12 schools are using it and will expect access when they come to college. Rather than banning ChatGPT and other AI tools or fretting over how to police them, we need to change our practices, our assignments, and our expectations. We need to focus more on helping students iterate their writing, develop their information literacy skills, and humanize their work.

One way to help with that is reflection, which helps students develop their metacognitive skills. It can also help them understand how to integrate AI into their learning processes and how they can build and expand on what AI provides. Reflection can also help reinforce academic honesty. Rather than hiding how they completed an assignment, reflection helps students embrace transparency.

ChatGPT also requires new ways of thinking, and having students reflect on their use of it could prove valuable. For instance, ChatGPT rarely produces solid results on the first try, and it often takes several iterations of a question to get good answers. Sometimes it never provides good answers. That makes it much like web or database searching, which requires patience and persistence as you refine search terms, narrow your focus, identify specific file types, try different types of syntax and search operators, and evaluate many pages of results. Add AI to the expanding repertoire of digital literacies students need. (Teaching guides and e-books are already becoming available online.)

ChatGPT's ability to create work as good as that of many early undergraduates means we will have to rethink assignments and curricula. For instance:

  • Create assignments in which students start with ChatGPT and then have discussions about strengths and weaknesses. Have students compare the output from AI writing platforms, critique that output, and then create strategies for building on it and improving it.
  • Use multistep, scaffolded assignments with feedback and revision opportunities.
  • Emphasize assignment dimensions that are (currently) difficult for AI: synthesis, student voice, and opinions.
  • Use project-based learning.

Anne Bruder offers additional suggestions in Education Week, Ethan Mollick does the same on his blog, and Anna Mills has created a Google Doc with many ideas (one of a series of documents and curated resources she has made available). Paul Fyfe of North Carolina State provides perhaps the most in-depth take on the use of AI in teaching, having experimented with an earlier version of the ChatGPT model more than a year ago.

Students often struggle to come up with questions to ask for research papers or topics to pursue for projects. ChatGPT's responses to questions are often bland and shallow, but it can also suggest ideas or solutions that aren't always apparent. It can become a partner, of sorts, in writing and problem-solving. It might suggest an outline for a project, articulate the main approaches others have taken to solving a problem, or provide summaries of articles to help decide whether to delve deeper into them. It might provide a counterargument to a position or opinion, helping strengthen an argument or point out flaws in a particular perspective. We need to help students evaluate those results, just as we help them interpret online search results and media of all types. Even so, ChatGPT can provide motivation for starting many types of projects.

ChatGPT is incapable of acknowledging its biases, although it has many. Maria Andersen, a teacher and creator of a technology startup, said that it had a white, male view of the world. Maya Ackerman of Santa Clara University told The Story Exchange: "People say the AI is sexist, but it's the world that is sexist. All the models do is reflect our world to us, like a mirror." ChatGPT has been trained to avoid hate speech, sexual content, and anything OpenAI considered toxic or harmful. Others have said that it avoids conflict, and that its deep training in English over other languages skews its perspective. Some of that will no doubt change in the coming months and years as the scope of ChatGPT expands. No matter the changes, though, ChatGPT will live in and draw from its programmers' interpretation of reality. Of course, that provides excellent opportunities for class discussions, class assignments, and critical thinking.

University policy on academic misconduct is broad enough to apply to problematic uses of artificial intelligence: 

Academic misconduct by a student shall include, but not be limited to, disruption of classes; threatening an instructor or fellow student in an academic setting; giving or receiving of unauthorized aid on examinations or in the preparation of notebooks, themes, reports or other assignments; knowingly misrepresenting the source of any academic work; unauthorized changing of grades; unauthorized use of University approvals or forging of signatures; falsification of research results; plagiarizing of another's work; violation of regulations or ethical codes for the treatment of human and animal subjects; or otherwise acting dishonestly in research.

Student Affairs also emphasizes to students the importance of academic integrity. Its guidance says, in part:

Academic integrity is a central value in higher education. It rests on two principles: first, that academic work is represented truthfully as to its source and its accuracy, and second, that academic results are obtained by fair and authorized means. "Academic misconduct" occurs when these values are not respected. Academic misconduct at KU is defined in the University Senate Rules and Regulations. A good rule of thumb is "if you have to ask if this is cheating, it probably is."

The questions about appropriate use of AI in academic work will no doubt linger for years. For instance, where do we set the boundaries for use of AI? If students use PowerPoint to redesign their slides, is it still their work? If they use ChatGPT to write part of a paper, is it still their paper? If they use DALL-E or other visual tools to create artwork or AI-powered tools to create music, is that really original work? If they use AI-powered summarization tools or grammar checkers, have they done the work on their own?

Those are difficult questions, and we have yet to determine where to set the boundaries of AI use in our classes or our professional lives. That makes it especially important to talk with students about your expectations and to include a statement in your syllabus about those expectations.

ChatGPT is just one of a growing number of digital tools using artificial intelligence. Those tools can summarize information, create artwork, iterate searches based on the bibliographies of articles you mark, answer questions from the perspectives of historical figures and fictional characters, turn text into audio and video, create animated avatars, analyze and enhance photos and video, create voices, and perform any number of digital tasks. AI is integrated in phones, computers, lighting systems, thermostats, and just about any digital appliance you can imagine. So the question isn't whether to use AI; we already are, whether we realize it or not. The question is how quickly we are willing to learn to use it effectively in teaching and learning.

Additional resources

This is a brief list of articles and sites related to generative artificial intelligence and teaching. You will find additional resources in handouts for CTE sessions on AI, writing, and critical thinking; using AI in teaching; and AI, policy, and academic integrity.

Resources for learning about generative AI

The easiest way to get started with generative AI is to try one of the most popular tools: ChatGPT, Bing Chat, Bard, or Claude. Many other tools are more focused, though, and are worth exploring. Some of the tools below were made specifically for researchers or graduate students. Others are more broadly focused but have similar capabilities.

Where to find AI tools

  • Futurepedia. A website with an extensive list of AI tools, along with information about cost.
  • PromptBase. A marketplace for buying and selling prompts for the AI tools DALL-E, GPT-3, and Midjourney.
  • Tools based on the GPT-3 language generation model that have free options: ChatGPT, OpenAI Playground, Chatsonic.
  • OpenAI Cookbook. A repository of code and prompts for interacting with OpenAI.

Further discussions

A Google group called AI in Education has frequent discussions about generative AI and allows members to ask and answer questions.

A glossary of AI terms

It’s easy to get lost when reading about artificial intelligence. Knowing some of the frequently used terms can help. Sites that can provide additional definitions and information include code.org, CompTIA, and Wikipedia.

Algorithm. A set of procedures or instructions that breaks problem-solving into a series of steps.
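
For instance, the short Python function below (a purely illustrative toy, not drawn from any particular AI system) spells out a simple problem as a series of steps:

    # A toy illustration: an algorithm for finding the largest number
    # in a list, written as a series of explicit steps.
    def find_largest(numbers):
        largest = numbers[0]        # Step 1: assume the first number is largest.
        for n in numbers[1:]:       # Step 2: examine each remaining number.
            if n > largest:         # Step 3: compare it with the current largest.
                largest = n         # Step 4: keep the bigger of the two.
        return largest              # Step 5: report the result.

    print(find_largest([3, 41, 7, 19]))  # prints 41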

API. Shorthand for application program interface, a method of connecting computer systems so that a receiving system draws on the abilities of the sending system. For instance, many companies have created apps that use, modify or extend the capabilities of the system behind ChatGPT.
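
As a hypothetical sketch, the Python snippet below shows roughly how such a connection works; the API key, model name, and prompt are placeholders, and any real integration should follow the provider's current documentation:

    # A minimal sketch, assuming the requests library and a placeholder
    # API key, of how an app might call OpenAI's chat completions endpoint.
    import requests

    API_KEY = "your-api-key-here"  # placeholder; real apps keep keys secret

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user",
                          "content": "Explain APIs in one sentence."}],
        },
    )

    # The sending system returns a result the receiving app can build on.
    print(response.json()["choices"][0]["message"]["content"])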

Artificial general intelligence. The goal of creating a computer system that can match or surpass human abilities across a wide range of tasks rather than a single specialty. This is sometimes referred to as AGI.

Artificial intelligence. A broad term for computer systems that handle work often done by people.

Bard. Google’s version of a chatbot.

Bing and Bing Chat. Microsoft’s search engine, Bing, now also has a chatbot based on GPT-4, the most recent large language model created by OpenAI. Unlike ChatGPT, Bing has access to the internet.

Bot or chatbot. This generally refers to a computer interface that responds to natural language questions in ways that mimic human interaction. A bot can also be a physical robot that carries out commands or performs tasks that humans might do. 

ChatGPT. An artificial intelligence platform that creates writing, generates code, solves problems, and answers questions based on natural-language prompts and questions. It has attracted widespread interest since it was made available in the fall of 2022 because of its vast knowledge base and ability to perform tasks quickly and easily. It does not have access to the internet, but various plugins and add-ons can give it that ability.

Generative AI. A computer system that creates, or generates, text, images, or code based on information a user provides. ChatGPT, Bing Chat, DALL-E, and Bard are all forms of generative AI.

Large language models. An approach to artificial intelligence that analyzes enormous amounts of text and creates probabilities for word sequencing. This analysis is often called training. A large language model allows ChatGPT and similar tools to respond to questions in ways that sound human. They aren’t human, though. They simply string words together in ways that mimic the patterns they have analyzed. “Large” refers to the billions of words drawn from books and other digital texts. You will often see these referred to as LLMs.
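
To make “probabilities for word sequencing” concrete, here is a deliberately tiny Python sketch of a bigram model. Real large language models are vastly more sophisticated, but the underlying idea of deriving word probabilities from text is similar:

    # A toy bigram model: count which word follows which in a tiny
    # "training" text, then turn the counts into probabilities.
    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept"
    words = text.split()

    follow_counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

    # Probability of each word that follows "the" in the training text.
    counts = follow_counts["the"]
    total = sum(counts.values())
    for word, count in counts.items():
        print(f"P({word} | the) = {count / total:.2f}")
    # P(cat | the) = 0.67, P(mat | the) = 0.33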

GPT. A digital model that allows computer software to create natural-language text, generate computer code, and interact in a human-like way. GPT refers to generative pre-trained transformer, a means of extrapolating new results from previous training. The most recent version, GPT-4, draws on vast amounts of text from books and the internet.

GitHub. A repository that allows users to download, modify, and deploy code and digital materials that others have created. Git refers to a form of version control that allows users to keep track of changes in digital files.

LLaMA. A large language model created by Meta, the company that owns Facebook.

Modeling. The process of creating a computational representation of a problem or domain that an AI system uses to make predictions or decisions.

OpenAI. A company that creates digital models and databases that other organizations use for creating chatbots and other types of digital tools. Its work is at the heart of many such tools, including ChatGPT and DALL-E.

PaLM. Google’s version of a large language model.

Text classifier. Software that analyzes written work to help gauge the likelihood that it was created by artificial intelligence.