The increasing use of applications that draw on generative artificial intelligence has led to intense discussions about the growing capabilities of generative AI and how it might change jobs, disciplines, societies, and even the way we think. Artificial intelligence has been evolving for decades, but most educators have done little to consider how these new digital tools might change teaching, learning, and student expectations. For instance:
- What do we need to know about generative AI platforms and how our students might use them?
- How might AI change how we teach, what we teach, what we assign students to do, and how we assess student work?
- What ethical issues do we need to address as we consider how AI might change the way we and our students think and act?
- How do we evaluate potential new tools and learning platforms when the algorithmic assumptions and decisions behind AI software are hidden?
There are no definitive answers to those questions, but the resources we have collected about generative AI are intended to provide guidance and ideas on how to approach these tools in your courses.
Some things to think about with AI
Talk with students about your expectations and how you will view (and grade) assignments generated solely with artificial intelligence. Emphasize the importance of learning and explain why you are having them complete the assignments you use. Why is your class structured as it is? How will they use the skills they gain?
That sort of transparency has always been important, but it is even more so now. Students intent on cheating will always cheat. Some draw from archives at Greek houses, buy papers online, or have friends do the work for them. ChatGPT and similar tools can provide just another means of avoiding the work that learning requires. Helping students understand how a course helps their learning will win over some of them, as will flexibility and choices in assignments. This is also a good time to emphasize the importance of human interaction in learning, something artificial intelligence lacks.
As fluent as ChatGPT often seems, its answers rarely delve beneath the surface of a topic. It makes mistakes. It makes things up. Its responses provide no clues about how it is programmed or why it provides the answers it does. A Princeton researcher called it a "bullshit generator" because it creates plausible arguments without regard for truth.
All of that makes it a valuable teaching tool, though. By having students probe for answers, we can help them improve their skepticism, challenge assumptions, and question information. By having them fact-check, we can help them understand the dangers of fluid writing that lacks substance or that relies on fallacies. By having them use ChatGPT or other AI tools for early drafts, we can push them to ask questions about information, structure, and sources. By having them apply different perspectives to ChatGPT's results, we can help broaden their understanding of points of view and argument.
Many students are already using ChatGPT in their school work. We can no more ban students from using artificial intelligence than we can ban them from using phones or calculators or laptops. We need to talk with students about how to use ChatGPT and other AI tools effectively and ethically, though. No, they should not take AI-written materials and turn them in for assignments, but yes, they should use AI when appropriate. Businesses of all sorts are already adapting to AI, and students will need to know how to use it when they move into the workforce. Students in K-12 schools are using it and will expect access when they come to college. Rather than banning ChatGPT and other AI tools or fretting over how to police them, we need to change our practices, our assignments, and our expectations. We need to focus more on helping students iterate their writing, develop their information literacy skills, and humanize their work.
One way to help with that is reflection, which helps students develop their metacognitive skills. It can also help them understand how to integrate AI into their learning processes and how they can build and expand on what AI provides. Reflection can also reinforce academic honesty: rather than hiding how they completed an assignment, students learn to be transparent about their process.
ChatGPT also requires new ways of thinking, and having students reflect on their use of it could prove valuable. For instance, ChatGPT rarely produces solid results on the first try, and it often takes several iterations of a question to get good answers. Sometimes it never provides good answers. That makes it much like web or database searching, which requires patience and persistence as you refine search terms, narrow your focus, identify specific file types, try different types of syntax and search operators, and evaluate many pages of results. Add AI to the expanding repertoire of digital literacies students need. (Teaching guides and e-books are already becoming available online.)
ChatGPT's ability to create work as good as many early undergraduates means we will have to rethink assignments and curricula. For instance:
- Create assignments in which students start with ChatGPT and then have discussions about strengths and weaknesses. Have students compare the output from AI writing platforms, critique that output, and then create strategies for building on it and improving it.
- Use multistep, scaffolded assignments with feedback and revision opportunities.
- Emphasize assignment dimensions that are (currently) difficult for AI: synthesis, student voice, and opinion.
- Use project-based learning.
Anne Bruder offers additional suggestions in Education Week, Ethan Mollick does the same on his blog, and Anna Mills has created a Google Doc with many ideas (one of a series of documents and curated resources she has made available). Paul Fyfe of North Carolina State provides perhaps the most in-depth take on the use of AI in teaching, having experimented with an earlier version of the ChatGPT model more than a year ago.
Students often struggle with questions to ask for research papers or topics to pursue for projects. ChatGPT's responses to questions are often bland and shallow, but it can also suggest ideas or solutions that aren't always apparent. It can become a partner, of sorts, in writing and problem-solving. It might suggest an outline for a project, articulate the main approaches others have taken to solving a problem, or provide summaries of articles to help decide whether to delve deeper into them. It might provide a counterargument to a position or opinion, helping strengthen an argument or point out flaws in a particular perspective. We need to help students evaluate those results, just as we help them interpret online search results and media of all types. ChatGPT can provide motivation for starting many types of projects, though.
ChatGPT is incapable of acknowledging its biases, although it has many. Maria Andersen, a teacher and creator of a technology startup, said that it had a white, male view of the world. Maya Ackerman of Santa Clara University told The Story Exchange: "People say the AI is sexist, but it's the world that is sexist. All the models do is reflect our world to us, like a mirror." ChatGPT has been trained to avoid hate speech, sexual content, and anything OpenAI considered toxic or harmful. Others have said that it avoids conflict, and that its deep training in English over other languages skews its perspective. Some of that will no doubt change in the coming months and years as the scope of ChatGPT expands. No matter the changes, though, ChatGPT will live in and draw from its programmers' interpretation of reality. Of course, that provides excellent opportunities for class discussions, class assignments, and critical thinking.
University policy on academic misconduct is broad enough to apply to problematic uses of artificial intelligence:
Academic misconduct by a student shall include, but not be limited to, disruption of classes; threatening an instructor or fellow student in an academic setting; giving or receiving of unauthorized aid on examinations or in the preparation of notebooks, themes, reports or other assignments; knowingly misrepresenting the source of any academic work; unauthorized changing of grades; unauthorized use of University approvals or forging of signatures; falsification of research results; plagiarizing of another's work; violation of regulations or ethical codes for the treatment of human and animal subjects; or otherwise acting dishonestly in research.
Student Affairs also emphasizes to students the importance of academic integrity. Its guidance says, in part:
Academic integrity is a central value in higher education. It rests on two principles: first, that academic work is represented truthfully as to its source and its accuracy, and second, that academic results are obtained by fair and authorized means. "Academic misconduct" occurs when these values are not respected. Academic misconduct at KU is defined in the University Senate Rules and Regulations. A good rule of thumb is "if you have to ask if this is cheating, it probably is."
The questions about appropriate use of AI in academic work will no doubt linger for years. For instance, where do we set the boundaries for use of AI? If students use PowerPoint to redesign their slides, is it still their work? If they use ChatGPT to write part of a paper, is it still their paper? If they use DALL-E or other visual tools to create artwork or AI-powered tools to create music, is that really original work? If they use AI-powered summarization tools or grammar checkers, have they done the work on their own?
Those are difficult questions, and we have yet to determine where to set the boundaries of AI use in our classes or our professional lives. That makes it especially important to talk with students about your expectations and to include a statement in your syllabus about those expectations.
ChatGPT is just one of a growing number of digital tools using artificial intelligence. Those tools can summarize information, create artwork, iterate searches based on the bibliographies of articles you mark, answer questions from the perspectives of historical figures and fictional characters, turn text into audio and video, create animated avatars, analyze and enhance photos and video, create voices, and perform any number of digital tasks. AI is integrated in phones, computers, lighting systems, thermostats, and just about any digital appliance you can imagine. So the question isn't whether to use AI; we already are, whether we realize it or not. The question is how quickly we are willing to learn to use it effectively in teaching and learning.
This is a brief list of articles and sites related to generative artificial intelligence and teaching. You will find additional resources in handouts for CTE sessions on AI, writing, and critical thinking; using AI in teaching; and AI, policy, and academic integrity.
Guides, tools and sites
- Critical AI Literacy for Educators, a Padlet curated by Kathryn Conrad and Sean Kamperman of the KU English Department. It includes excellent resources on AI literacy, ethics, plagiarism and academic misconduct, policies, potential applications, and tools.
- A Blueprint for an AI Bill of Rights for Education, by Kathryn Conrad. Critical AI blog (July 2023).
- Sentient Syllabus Project, a collaborative resource created by professors in Canada, the United States, and Japan to help instructors use artificial intelligence resources meaningfully. The site can be challenging to navigate, but it contains many thoughtful materials related to syllabus language, objectives, and activities.
- AI Text Generators and Teaching Writing: Starting Points for Inquiry, curated by Anna Mills of the College of Marin and hosted by Colorado State University. Also see:
- A Teacher’s Prompt Guide to ChatGPT, by Andrew Herft.
- The practical guide to using AI to do stuff, by Ethan Mollick of the Wharton School of Business.
- AI in Higher Education Metasite, by Ray Schroeder.
- Update Your Course Syllabus for ChatGPT, by Ryan Watkins, George Washington University.
- Classroom Policies for AI Generative Tools, compiled by Lance Eaton of College Unbound. Material from classes at several universities.
- ChatGPT Resources for Educators, compiled by Ben Erwin, Syracuse.
AI, writing, and assignment design
- Using AI to Implement Effective Assignment Design: Five Strategies, Including Prompts, by Ethan Mollick and Lilach Mollick. SSRN (24 March 2023).
- How to cheat on your final paper: Assigning AI for student writing, by Paul Fyfe. AI & Society (accepted 4 September 2022). A professor writes about an AI experiment he undertook before ChatGPT roiled education.
- Embrace the Bot: Designing Writing Assignments in the Face of AI, by Eric Prochaska, Faculty Focus (23 January 2023).
- Teaching Actual Student Writing in an AI World, by Kevin Jacob Kelley, Inside Higher Ed (19 January 2023). Offers 10 strategies for engaging students in writing projects that avoid AI.
- Worried About ChatGPT? Don’t Be, by Hetal Thaker, Inside Higher Ed (23 January 2023). Argues that AI is part of larger questions the humanities have been grappling with for years (relevance, creativity, assessment) and that it is just another reminder of why we need to keep pushing for solutions.
- ChatGPT can’t be credited as an author, says world’s largest academic publisher, by James Vincent, The Verge (26 January 2023). Springer Nature says authors are free to use AI tools as long as they disclose their use. Medium announced a similar policy and cites policies from other organizations.
- A Message to Students About ‘The Bot,’ by Julia McKenzie Munemo, Inside Higher Ed (23 January 2023). The director of a writing center argues that AI will further demotivate students who see writing as a chore and classes as mere tasks. To educators, Munemo writes: “In a world where students are taught to write like robots, a robot can write for them.”
- Why I’m Not Scared of ChatGPT, by Christopher Grobe, Chronicle Review (3 February 2023).
- ChatGPT’s Fluent BS Is Compelling Because Everything Is Fluent BS, by Amit Katwala, Wired (9 December 2022). “ChatGPT was trained on real-world text, and the real world essentially runs on fluent bullshit,” from media to business to politics to education.
AI and teaching
- Will ChatGPT Change the Way You Teach?, by Beth McMurtrie, Chronicle of Higher Education newsletter (5 January 2023).
- Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach, by Kalley Huang, New York Times (16 January 2023).
- ChatGPT Advice Academics Can Use Now, by Susan D’Agostino, Inside Higher Ed (12 January 2023). Thoughts from a variety of educators.
- Is ChatGPT a threat to education?, by Iqbal Pittalwala, UC Riverside News (24 January 2023). Instructors in computer science, biology, education policy, creative writing, physics and astronomy, economics, and public health offer their views.
- Designing Assignments in the ChatGPT Era, by Susan D’Agostino, Inside Higher Ed (31 January 2023).
- With ChatGPT, We’re All Editors Now, by Rachel Elliott Rigolino, Inside Higher Ed (31 January 2023). Argues that AI has increased the need for critical editing.
- Can ChatGPT fill in mentorship gaps for Gen Z workers, by Cloey Callahan, Worklife (27 January 2023).
- The bots are here to stay. Do we deny or do we adapt?, by Doug Ward, Bloom’s Sixth (20 January 2023).
AI and critical thinking
- Conversation-Stopper: Will the development of AI tools make us less intelligent?, by John Symons, Return (26 July 2022).
- How ChatGPT robs students of motivation to write and think for themselves, by Naomi S. Baron, The Conversation (19 January 2023). A linguist argues that writing is a human process and that AI tools diminish students’ ability to improve their thinking.
- How to Teach Critical Thinking, by Daniel T. Willingham, Education: Future Frontiers (May 2019). A report commissioned by Australian educators provides a broad literature review of critical thinking research. That research suggests that critical thinking is discipline specific, even though we often consider it a general skill. The Hechinger Report also wrote about Willingham’s work.
- An AI toolkit for libraries, by Michael Upshawk, UKSG Insights (1 November 2022). This article, published in a libraries journal, provides an excellent overview of how AI tools work and how we can evaluate them for potential use.
AI and ethics
- Teaching AI Ethics, by Leon Furze. An Australian educator’s excellent guide on the many facets of AI in education. Contains discussion questions and activities for various disciplines.
- Ethical guidelines on the use of AI in education, European Commission (25 October 2022).
- Australia’s AI Ethics Principles, Department of Industry, Science and Resources. This is focused more broadly than education, but educators may find many of the points useful for discussion.
Resources for learning about generative AI
- Elements of AI and Ethics of AI, two free online courses created by the University of Helsinki. A course called Practical Data Ethics, created by Rachel Thomas, a professor of practice at Queensland University of Technology, also explores the ethics of AI.
- Generative AI learning path. A series of free courses from Google.
The easiest way to get started with generative AI is to try one of the most popular tools: ChatGPT, Bing Chat, Bard, or Claude. Many other tools are more focused, though, and are worth exploring. Some of the tools below were made specifically for researchers or graduate students. Others are more broadly focused but have similar capabilities.
- Research assistants like Research Rabbit and ARIA (an add-on to Zotero), Litmaps, Connected Papers and ClioVis are helping scholars find and organize literature, visualize connections among ideas, share collections of research, and keep up with new research.
- Search tools like Consensus, Elicit, Scite, SciSpace, Snowball, System Pro, You, Metaphor, and Inciteful help researchers focus on specific types of literature and use AI to guide them toward appropriate materials.
- Tools for interacting with articles give researchers a new way to ask questions about existing materials and to find ideas or connections they might have overlooked. These include Explainpaper, which allows you to highlight text and ask for explanations; Humata; PaperBrain; Sharly, which has a generous free version; ChatPDF; MapDeduce; Census GPT, which connects to U.S. Census data; Upword; OpenRead; PDF.ai; Kadoa; and Cerelyze, which allows you to interact with articles and translate methods into code.
- Summary tools provide quick synopses of articles, papers, and books, making it easier to work through large amounts of literature. These include Scholarcy, which highlights key areas of academic articles and allows you to build a personal library; Lateral; MapDeduce; ShortForm.ai, which uses a browser plugin; TinyWow, which offers a suite of free AI-powered tools; Summarize.tech and Eightify, which summarize YouTube videos; and BooksAI, which provides summaries of books.
- Several data tools allow researchers to query their quantitative data with natural language questions. These include Julius; Formulas HQ; Chatwithdata; Chatcsv; ChatNode; Tomat; Chartify; DataSquirrel; Chaindesk; and JADBio, which focuses on genomics, medicine, health, and drug discovery. Web-scraping tools like Simplescraper help automate time-consuming data-gathering.
- Writing tools like Jenni, Paperpal, Writefull, and Wisio promise to make academic writing easier.
- Members of the ChatGPT community forum offer many ideas on how to use ChatGPT, as do members of the OpenAI Discord forum. (If you’ve never used Discord, here’s a guide for getting started.)
- For coding, GitHub Copilot has become an essential tool.
Where to find AI tools
- Futurepedia. A website with an extensive list of AI tools, along with information about cost.
- PromptBase. A marketplace for buying and selling prompts for the AI tools DALL-E, GPT-3, and Midjourney.
- Tools based on the GPT-3 language generation model that have free options: ChatGPT, OpenAI Playground, Chatsonic.
- OpenAI Cookbook. A repository of code and prompts for interacting with OpenAI.
A Google group called AI in Education has frequent discussions about generative AI and allows members to ask and answer questions.
It’s easy to get lost when reading about artificial intelligence. Knowing some of the frequently used terms can help. Sites that can provide additional definitions and information include code.org, CompTIA, and Wikipedia.
Algorithm. A set of procedures or instructions that breaks problem-solving into a series of steps.
API. Shorthand for application program interface, a method of connecting computer systems so that a receiving system draws on the abilities of the sending system. For instance, many companies have created apps that use, modify or extend the capabilities of the system behind ChatGPT.
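To make the idea concrete, here is a minimal sketch of how an application might package a request for a chat service's API. The endpoint URL and field names below are hypothetical placeholders for illustration only, not any real provider's schema; a provider's own documentation defines the actual interface.

```python
import json

def build_chat_request(prompt):
    """Package a user prompt as an API request.

    The URL and JSON fields are hypothetical placeholders;
    a real provider's documentation defines the actual schema.
    """
    return {
        "url": "https://api.example.com/v1/chat",  # hypothetical endpoint
        "body": json.dumps({
            "model": "example-model",  # hypothetical model name
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# An app built on such an API sends requests like this one and
# displays the responses, extending the underlying system's abilities.
request = build_chat_request("Explain photosynthesis briefly.")
```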
Artificial general intelligence. A computer system that could match or exceed human intelligence across a wide range of tasks. Creating such a system is a goal of some AI research. This is sometimes referred to as AGI.
Artificial intelligence. A broad term for computer systems that handle work often done by people.
Bard. Google’s version of a chatbot.
Bing and Bing Chat. Microsoft’s search engine, Bing, now also has a chatbot based on GPT-4, the most recent large language model created by OpenAI. Unlike ChatGPT, Bing has access to the internet.
Bot or chatbot. This generally refers to a computer interface that responds to natural language questions in ways that mimic human interaction. A bot can also be a physical robot that carries out commands or performs tasks that humans might do.
ChatGPT. An artificial intelligence platform that creates writing, generates code, solves problems, and answers questions based on natural-language prompts and questions. It has attracted widespread interest since it was made available in the fall of 2022 because of its vast knowledge base and ability to perform tasks quickly and easily. It does not have access to the internet, but various plugins and add-ons can give it that ability.
Generative AI. A computer system that creates, or generates, text, images, or code based on information a user provides. ChatGPT, Bing Chat, DALL-E, and Bard are all forms of generative AI.
Large language models. An approach to artificial intelligence that analyzes enormous amounts of text and creates probabilities for word sequencing. This analysis is often called training. A large language model allows ChatGPT and similar tools to respond to questions in ways that sound human. They aren’t human, though. They simply string together words that mimic the patterns they have analyzed. “Large” refers to billions of words drawn from books and other digital texts. You will often see these referred to as LLMs.
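The word-probability idea can be illustrated with a toy bigram model. This is a drastic simplification (real LLMs train neural networks on billions of words), but the underlying goal is the same: turn observed word sequences into probabilities for the next word.

```python
from collections import Counter, defaultdict

# Toy "training" text; a real model would use billions of words.
text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    """Turn the counts for one word into next-word probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In the text above, "the" is followed by "cat" twice and "mat" once,
# so "cat" gets probability 2/3 and "mat" gets 1/3.
probs = next_word_probabilities("the")
```

A chatbot generates text by repeatedly sampling a next word from distributions like this one, just at a vastly larger scale.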
GPT. A digital model that allows computer software to create natural-language text, generate computer code, and interact in a human-like way. GPT refers to generative pre-trained transformer, a means of extrapolating new results from previous training. The most recent version, GPT-4, draws on a vast corpus of text from the internet.
GitHub. A repository that allows users to download, modify, and deploy code and digital materials that others have created. Git refers to a form of version control that allows users to keep track of changes in digital files.
LLaMA. A large language model created by Meta, the company that owns Facebook.
Modeling. Development of a decision-making process in AI systems.
OpenAI. A company that creates digital models and databases that other organizations use for creating chatbots and other types of digital tools. Its work is at the heart of many such tools, including ChatGPT and DALL-E.
PaLM. Google’s version of a large language model.
Text classifier. Software that analyzes written work to help gauge the likelihood that it was created by artificial intelligence.