


How a new Copilot tool might be used in teaching



By Doug Ward

The KU version of Copilot now allows the creation of agents, which means you can customize Copilot and give it instructions on what you want it to do, how you want it to respond, and what format its output should follow. 

An agent still uses Copilot’s foundational training, but the instructions can reduce the need for long, complex prompts and speed up tasks you perform regularly. You can also direct the agent to websites you would like it to draw on, and create starter prompts for users.

Copilot has also gained the ability to store prompts for reuse. That isn’t nearly as useful as creating agents, but both additions give users more control over Copilot and should make it more useful for many faculty members, staff members, and graduate students. (I don’t know whether the new functions are available to undergraduates, but they probably are.)

These features have been available for some time in paid versions of Copilot. What is new is the access available when you use your KU credentials to log in to Copilot, which is Microsoft’s main generative artificial intelligence tool.

Professor touches computer screen as letters emerge from the screen

Potential and limitations 

Agents have the potential to improve the accuracy of Copilot’s responses because the directions you provide limit the scope of Copilot’s actions and tailor the tone and substance of those responses. Accuracy also improves if you give Copilot examples and specific material to work with (an uploaded document, for instance).

If you log in with your KU ID, Copilot also has additional layers of data protection. For instance, material you use in Copilot isn’t used to train large language models. It is also covered by the same privacy protections that KU users have with such tools as Outlook and OneDrive.

For all its potential, Copilot has several limitations. Those include:

  • Customization restrictions. A Copilot agent allows you to provide up to 8,000 characters, or about 1,500 words, of guidance. That guidance is essentially an extended prompt created with natural language, and it includes any examples you provide or specific information you want your agent to draw on. The 8,000 characters may seem substantial, but that count dwindles quickly if you provide examples and specific instructions.
  • Input restrictions. Once you create an agent, Copilot also has an input limit of 8,000 characters. That includes a prompt and whatever material you want Copilot to work with. If you have given your agent substantial instructions, you shouldn’t need much of a prompt, so you should be able to upload a document of about 1,500 words, a spreadsheet with 800 cells, or a PowerPoint file with eight to 16 slides. (Those are just estimates; a rough way to check a document against that budget is sketched after this list.) The limit on code files will vary depending on the language and the volume of documentation and comments. For instance, Python, Java and HTML will use up the character count more quickly. The upshot is that you can’t use a Copilot agent to analyze long, complex material – at least in the version we have at KU. (The 8,000-character limit is the same whether you use an agent or use a prompt with Copilot itself.)
  • Limited scope. Tools like NotebookLM allow you to analyze dozens of documents at once. I haven’t found a way to do that with a Copilot agent. Similarly, I haven’t found a way to create a serial analysis of materials. For instance, there’s no way to give Copilot several documents and ask it to provide individual feedback on each. You have to load one document at a time, and each document must fall within the limits I list above.
  • Potential fabrication. The guidance you provide to a Copilot agent doesn’t eliminate the risk of fabrication. Anything created by a generative AI model may include fabricated material and fabricated sources. These models also have inherent biases because of the way they are trained. It is crucial to examine all AI output closely. Ultimately, anything you create or do with generative AI is only as good as your critical evaluation of that material.
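If you want a rough sense of whether a document will fit within that 8,000-character input budget before you upload it, a quick script can do the counting for you. The sketch below is only an illustration that runs on your own computer; the file name and the characters-per-word estimate are my own assumptions, not settings built into Copilot.

```python
# A rough, illustrative check of whether a document is likely to fit within
# Copilot's 8,000-character input budget. The file name and the words
# estimate (about five characters per word) are assumptions for the example.

def fits_copilot_budget(path: str, prompt: str = "", limit: int = 8000) -> bool:
    """Return True if the prompt plus the document text stays within the limit."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    total = len(prompt) + len(text)
    print(f"Prompt: {len(prompt)} characters")
    print(f"Document: {len(text)} characters (roughly {len(text) // 5} words)")
    print(f"Total: {total} of {limit} allowed")
    return total <= limit

if __name__ == "__main__":
    # Hypothetical file used only for illustration.
    if not fits_copilot_budget("case_report.txt", prompt="Give feedback on this report."):
        print("Too long: trim or split the document before uploading.")
```

If a document comes in over the budget, the practical options are the same ones described above: trim it, split it into smaller pieces, or work with it one section at a time.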

An example of what you might do

I have been working with the Kansas Law Enforcement Training Center, a branch of KU that provides training for officers across the state. It is located near Hutchinson.

One component of the center’s training involves guiding officers in writing case reports. Those reports provide brief accounts of crimes or interactions an officer has after being dispatched. They are intended to be factual and accurate. At the training center, officers write practice reports, and center staff members provide feedback. This often involves dozens of reports at a time, and the staff wanted to see whether generative AI could help with the process.

Officers have the same challenges as all writers: spelling, punctuation, grammar, consistency, and other structural issues. Those issues provided the basis for a Copilot agent I created. That agent allows the staff to upload a paper and, with a short prompt, have Copilot generate feedback. A shareable link allows any of the staff members to use the agent, improving the consistency of feedback. The agent is still in experimental stages, but it has the potential to save the staff many hours they can use for interacting with officers or working with other aspects of training. It should also allow them to provide feedback much more quickly.

Importantly, the Copilot agent keeps the staff member in control. It creates a draft that the staff member can edit or expand on before providing feedback to the officer. That is, Copilot provides a starting point, but the staff members must draw on their own expertise to evaluate that output and decide what would be useful to the officer.

Other potential uses

If you aren’t sure whether you could use a Copilot agent in your teaching-related work, consider how you might use a personal assistant who helps with your class. What areas do students struggle with? What do they need help with when you aren’t available? What do they need more practice with? How can you help students brainstorm and refine ideas for projects and papers? What aspects of your class need to be re-envisioned? What tasks might you give an assistant to free up your time?

For instance, a CTE graduate fellow hopes to create an agent to help students learn MLA and APA style. I have written previously about how Copilot can be used as a coach for research projects. Many faculty members at the University of Sydney have created agents for such tasks as tutoring, skill development, and feedback to students. Their agents have been used to help students in large classes prepare for exams; help faculty create case studies and provide feedback on student work; help students troubleshoot problems, improve grammar skills, practice interviewing, better understand lecture content, create research proposals, and get answers to general questions about a class when an instructor isn’t available. Those faculty members are in fields such as biology, occupational therapy, biochemistry, education, social work, psychology, nursing, and journalism. 

Some of the examples at the University of Sydney may be difficult for KU faculty to emulate because Sydney has a custom-built system called Cogniti. That system uses Copilot agents but has more sophisticated tools than KU has. Microsoft has also created many types of agents. As with the examples from Sydney, some are beyond the capabilities of the system we have access to at KU, but they can give you a sense of what is possible.

If you decide to create your own agent, I explain in a separate article and video how you can do that. My goal is to help instructors explore ways to use generative artificial intelligence proactively rather than feel like they are constantly fighting against its misuse. If nothing else, creating guidance for an agent can help you better articulate steps students can take to improve their learning and identify areas of your class you might want to improve.


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

Some thoughts about generative AI as the semester starts



By Doug Ward

The shock has worn off, but the questions about how to handle generative artificial intelligence in teaching and learning seem only to grow.
Those questions lack easy answers, but there are concrete steps you can take as we head into the third year of a ChatGPT world:

  • Create a clear policy about generative AI use in your class.
  • Talk with students about generative AI frequently. Encourage questions.
  • Talk frequently about the skills students gain in your class. Explain why those skills are important and how students can use them. Do this early in the semester and then with each assignment.
  • Build community and trust in your classes. Student use of generative AI is a symptom of underlying issues of trust, perceptions of value, and social pressures, among other things.
  • Create assignments that help students explore generative AI. You don't have to like or promote generative AI, but students need to understand its strengths and weaknesses, and how to approach its output with a critical eye.  
  • Experiment with generative AI yourself and consider how it is changing – or might change – your discipline and your teaching.

That’s just a start. As I said, the questions about generative AI keep piling up. Here are a few additional updates, thoughts, and observations.

A university building embedded in a hill made of computer chips and diodes, with Q&A rising from the top

What is the university doing with AI?

Several things have been taking place, and there are many opportunities to learn more about generative AI.

  • AI Task Force. A task force that includes members of the Lawrence and medical school campuses began work in the fall. It will make recommendations on how the university might approach generative AI. It will then be up to university leaders, faculty, and university governance to decide what types of policies (if any) to pursue.
  • Faculty Senate and University Senate. Both governance bodies have had discussions about generative AI, but no formal policies have emerged.
  • University bot. The university has contracted with a vendor to provide a chatbot for the KU website. The bot is still being developed, but vendor interviews focused on such uses as interacting with prospective students, responding to text queries from students, providing reminders to students, and answering questions related to IT and enrollment management.
  • AI in Teaching Working Group. This group, through the Center for Teaching Excellence, meets monthly online, and it has a related Teams site. If you are interested in joining either, email Doug Ward (dbward@ku.edu).
  • AI think tank. Lisa Dieker (lisa.dieker@ku.edu) has organized the AI Tech User Think Tank through the FLITE Center in the School of Education and Human Sciences. It is intended primarily for connecting faculty interested in AI-related grant work and research, but meetings cover many types of AI-related issues. Contact her if you are interested in joining.
  • Digital Education Council. The School of Education and Human Sciences has joined the Digital Education Council, an international group of universities and corporations focused on collaborative innovation and technology. Much of the group’s recent work has focused on use of generative AI in education and industry.
  • Libraries AI discussion group.  The KU Libraries staff has been actively exploring how generative AI might change the way people search, find, and use information. A Teams discussion site has been part of that. Most conversations are, of course, library related, but participants often share general information about AI or about library trials.
  • CTE AI course. CTE has made AI-related modules available for instructors to copy, use, or adapt in their own courses. The modules cover such areas as how generative AI works, why it creates many ethical quandaries, how it can be used ethically, and what the future of AI might entail. Anyone interested in gaining access to the modules should email Doug Ward (dbward@ku.edu). 

What about a policy for classes?

The university has no policy related to AI use in classes, and we know of no policy at the school level, either. That means it is crucial for instructors to talk with students about expectations on AI use and to include syllabus information about use of, or prohibitions on, generative AI.

We can’t emphasize that enough: Talk with students about generative AI. Encourage them to ask questions. Make it clear that you welcome those questions. No matter your policy on use of generative AI, help students understand what skills they will gain from your class and from each assignment.  (See Maintaining academic integrity in the AI era.)  

What are we hearing about AI use among students?

Students have been conflicted about generative AI. Some see use of it as cheating. Some view the training of generative AI on copyrighted material as theft of intellectual property. Some worry about privacy and bias. Others worry about AI’s environmental impact. 

Even so, large percentages of students say they use generative AI in their coursework, even if instructors ask them not to. They expect faculty to adapt to generative AI and to help them learn how to use it in jobs and careers. For the most part, that hasn’t happened, though. 

Most students welcome the opportunity to talk about generative AI, but many are reluctant to do so out of fear that instructors will accuse them of cheating. That has to change. Only by engaging students in discussions about generative AI can we find a way forward.

Why are so many students using generative AI?

Many instructors assume students are lazy and want to cheat. The reality is far more complex. Yes, some avoid the hard work of learning. Most, though, use generative AI for other reasons, which include the following:

  • Students feel unprepared. Many students struggled during the pandemic. Expectations of them diminished, and many never gained the core reading, writing, math, and analytical skills they need in college. College requirements and expectations have largely remained the same, though, with students unsure how to cope. Generative AI has become a way to make up for shortcomings. 
  • They feel overwhelmed. Some students have families or other obligations, many work 20 or more hours a week, and most still feel lingering effects from the pandemic. Anxiety, depression, and related mental health issues have increased. That mix pushes many students to take shortcuts just to get by. 
  • They feel pressured to achieve high GPAs. Scholarships often require a 3.5 GPA or higher, and students who want to attend graduate school or medical school feel a need to maintain high GPAs. That can push them toward AI use if they fear falling below whatever benchmark they have set for themselves or that others have imposed on them. 
  • They lack skills in time management. Students who wait until the last minute to study or to complete assignments create unnecessary stress for themselves. They also find out that assignments can’t be completed at the last minute, and they turn to AI for help.
  • They worry about job expectations. Students have been getting mixed messages about generative AI. Some instructors denounce it and see any use of it as cheating. At the same time, many employers say they expect graduates to know how to use it. Current students are especially job-oriented. Depending on what they hear and read, they may see experience with generative AI as more important than skills they would gain by doing coursework themselves. 
  • They see a degree as a consumer product. As the cost of college has increased, many students have started looking at a degree in transactional terms. A degree is simply a means to a job. They are paying a lot of money, the reasoning goes, and that should give them the right to use whatever tools they want to use and to approach class in whatever way helps them succeed.
  • They don’t see value in an assignment or class. This is a big aspect of most types of academic misconduct. Most students want to learn, but they don’t always understand why they must take particular classes or complete some assignments. If students don’t see value in an assignment or a class, they may just turn over any work to generative AI.
  • Generative AI is ubiquitous. We have all been using some form of AI for years, and generative AI is being added to most software. It often seems impossible to avoid, and many students already see it as just another tool at their disposal. 

None of that should be seen as an excuse to cheat, but instructors need to keep it in mind as they structure their classes. Here are some strategies they can use:

  • Never assume students have particular skills. Even if they had these skills in the past, they may not today. Provide opportunities for review or for practicing and learning crucial skills.
  • Scaffold assignments. Assign papers or projects in smaller segments so that students can get feedback and revise their work. Avoid high-stakes tests and use low-stakes quizzes and exams instead.
  • Make expectations transparent. Explain the skills students need to succeed and the expectations you have for successful completion of assignments. Provide examples. Provide resources for reviewing or learning expected skills. Encourage students to talk with you privately if they feel ill-prepared to handle an assignment.
  • Provide flexibility. This might involve dropping the lowest grades in segments of coursework, giving students, say, two opportunities to turn in work a few days after a deadline, or showing leniency when they communicate with you about challenges they are having. Most students still need structure, but flexibility will diminish pressure on them and on you.

Can’t detectors solve problems of student AI use?

Results from AI detectors are not proof that a student has cheated. Detection tools have proved unreliable, and students can fool them with minimal effort. Research has been consistent that AI detectors are far more likely to flag the work of students for whom English is not a first language. Because of those problems, many universities have disabled AI detectors. Faculty at KU have access to a Turnitin detector. If you use it, use the results as information for talking with students.

What are you hearing from the Generative AI in Teaching Working Group?

Discussions have been robust and energizing, covering such topics as creating policies about student use of generative AI, integrating generative AI into courses, finding and experimenting with new tools, helping students learn about the pitfalls and biases of AI tools, and finding ways to expand learning with generative AI. Members share news from conferences, colleagues, readings, and research. They offer ideas, insights, recommendations, and support. Here are a few things that come up frequently.

  • More instructors need to experiment. Most members of the working group have been experimenting with generative AI tools and with ways to integrate them into teaching and learning. That can sometimes feel isolating, especially if colleagues are unwilling to learn about generative AI or to acknowledge student use of it. 
  • We need access to tools. Many people who want to explore feel held back by lack of access to appropriate tools and by the university’s slow response to generative AI. 
  • We need time and resources. Experimenting with generative AI takes time, and integrating it into classes takes more time, especially because many long-used approaches need to be rethought.
  • We need to help students learn about generative AI. Most are uninformed and use generative AI with little or no understanding of how to prompt effectively, how to scrutinize the output of chatbots, or how to choose appropriate tools. Students need to know about how businesses are using AI tools and how to prompt chatbots effectively. The university needs courses that delve into the ethical aspects of generative AI, including privacy, safety, and appropriate use. Existing classes also need to integrate these topics.

Will the students of the near future be able to get a meaningful education without AI assistance?

Absolutely. The human aspects of education remain as crucial as ever. A college education provides access to a community of learners, interaction with professors and peers, support from experienced staff and advisors, and an opportunity to explore the outer and inner worlds with a diverse group of people.

But ...

We are still trying to figure out how and where generative AI fits into human-centered learning. People often associate generative AI with ChatGPT, but thousands of specialty tools have been developed. Many of those tools show promise in education, including those that adapt to individual students, provide opportunities to practice skills, provide feedback and direction, and allow people to explore topics they might never have considered before. We have only scratched the surface on how generative AI might transform teaching and learning.

One thing is clear: Students who hand off the challenging aspects of learning to generative AI will eventually fail. Learning takes work. It’s hard. It takes time and practice. Generative AI may be able to assist at times, but it can’t learn for students.


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

How K-12 education connects to AI literacy in college



By Doug Ward

Kansas ranks near the bottom in the percentage of schools offering foundational computer science education, according to a study by Code.org, the Computer Science Teacher Association, and the Expanding Computing Education Pathways Alliance.

Nationwide, 57.5% of schools offered a computer science class in 2023. Kansas was more than 20 percentage points below that average, with 36% of schools offering a foundational course. Only three states had lower percentages: Louisiana (35%), Montana (34%) and Minnesota (28%).

That has important implications for higher education. Many Kansas students who attend KU may have little understanding of how generative artificial intelligence and the large language models behind it work. That puts them at a disadvantage in understanding how to use generative AI effectively and how to approach it critically. Computer science courses aren't the only way students can learn about generative AI, but a growing number of states see those courses as crucial to the future.  

Shuchi Grover, director of AI and education research at Looking Glass Ventures, delved into that in a recent speech at the National Academies of Sciences, Engineering, and Medicine.

“You want children to be equipped with understanding the world they live in,” Grover said. “Think about how much technology is all around them. Is it wise to completely leave them in the dark about what computing and AI is about?”

From 2023 State of Computer Science Education: https://advocacy.code.org/stateofcs/

 

More than 10,000 schools nationwide do not offer a computer science course, the Code.org report says. Not surprisingly, schools with 500 students or fewer are the least likely to offer such a course, as are rural schools (which are often the same). The report noted a disparity in access for students of color, students with disabilities, and students who come from low-income families. Young women represented only 31% of students enrolled in foundational computer science courses.

Like Grover, the authors of the Code.org study make a compelling point about the connection between computer science and generative AI. The report says (in bold): “We cannot prepare students for a future with AI without teaching them the foundations of computer science.”

I'm all in favor of teaching digital literacy, computer literacy, and AI literacy. Students can learn those skills in many ways, though. Requiring a computer science course seems less important than providing opportunities for students to explore computer science and improve their understanding of the digital world.

Efficiency vs. creativity

A couple of other elements of Grover’s talk at the National Academies are worth noting.

An audience member said that generative AI was generally portrayed in one of two ways: using it to do existing things better (efficiency) or to approach new problems in new ways (“to do better things”). Most studies have focused on efficiency, he said, to the exclusion of how we might apply generative AI to global challenges.

Grover said that she thought we definitely needed to focus on bigger issues. Efficiency has a role, though.

“This idea of efficiency in the school system is fraught,” Grover said. “Time fills up no matter how many efficiency tools you give them. And I think it’s unfair. Teachers all over the world, especially in the U.S. and I also see in India, are so overworked. ... I think it’s good that AI can help them with productivity and doing some of that drudgery – you know, the work that just fills up too much time – and take that off their plate.”

Schools in the United States have been slow to respond to generative AI, she said, because the system is so decentralized. Before the use and understanding of generative AI can spread, she said, “a teacher has to be able to use it and has to be able to see value.”

That will require listening.

“I think we need to listen to teachers – a lot. And maybe there’s something we can learn about where we need to focus our efforts. … Teachers need to have a voice in this – a big voice.”

Briefly …

Cheap AI ‘video scraping’ can now extract data from any screen recording, by Benj Edwards. Ars Technica (17 October 2024).

Stanford Researchers Use AI to Simulate Clinical Reasoning, by Abby Sourwine. Government Technology (10 October 2024).

Forget chat. AI that can hear, see and click is already here, by Melissa Heikkilä. MIT Technology Review (8 October 2024).

Colleges begin to reimagine learning in an AI world, by Beth McMurtrie. Chronicle of Higher Education (3 October 2024).

Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating, by Benj Edwards. Ars Technica (20 September 2024).

United Nations wants to treat AI with the same urgency as climate change, by Will Knight, Wired, via Ars Technica (20 September 2024).

Where might AI lead us? An analogy offers one possibility



By Doug Ward

As I prepared to speak to undergraduates about generative artificial intelligence last October, I struggled with analogies to explain large language models.

Those models are central to the abilities of generative AI. They have analyzed billions of words, billions of lines of code, and hundreds of millions of images. That training allows them to predict sequences of words, generate computer code and images, and create coherent narratives at speeds humans cannot match. Even programmers don’t fully understand why large language models do what they do, though.

So how could I explain those models for an audience of novices?

The path I took in creating an analogy illustrates the strengths and weaknesses of generative AI. It also illustrates a scenario that is likely to become increasingly common in the future: similar ideas developed and shared simultaneously. As those similar ideas emerge in many places at once, the role of individuals in developing those ideas will also grow increasingly important – through understanding of writing, coding, visual communication, context, and humanity.

AI-generated image of an AI cookbook

Getting input from generative AI

In my quest for an analogy last fall, I turned to Microsoft Copilot for help. I prompted Copilot to act as an expert in computer programming and large language models and to explain how those models work. My audience was university undergraduates, and I asked for an analogy to help non-experts better understand what goes on behind the scenes as generative AI processes requests. Copilot gave me this:

Generative AI is like a chef that uses knowledge from a vast array of recipes to create entirely new and unique dishes. Each dish is influenced by past knowledge but is a fresh creation designed to satisfy a specific request or prompt.

I liked that and decided to adapt it. I used the generative tool DALL-E to create images of a generative AI cookbook, a chef in a futuristic kitchen, and food displayed on computer-chip plates. I also created explanations for the steps my large language model chef takes in creating generative dishes.

How a large language model chef works

Within this post, you will see the images I generated. Here’s the text I used (again modified from Copilot’s output):

A chef memorizes an enormous cookbook (a dataset) so that it knows how ingredients (words, images, code) are usually put together.

Someone asks for a particular dish with special ingredients (a prompt), so the chef creates something new based on everything it has memorized from the cookbook.

The chef tastes the creation and makes sure it follows guidance from the cookbook.

Once the chef is satisfied, it arranges the creation on a plate for serving. (With generative AI, this might be words, images or code.)

The chef’s patrons taste the food and provide feedback. The chef makes adjustments and sends the dish back to patrons. The chef also remembers patrons’ responses and the revisions to the dish so that next time the dish can be improved.
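For readers who want to peek a little further behind the analogy, here is a toy sketch of the “memorize the cookbook, then cook to order” idea. It uses a tiny word-pairing table rather than a neural network, so it is only a cartoon of what large language models actually do, and the miniature “cookbook” text is invented for illustration.

```python
# A toy illustration of the "chef" analogy: learn which ingredients (words)
# tend to follow one another in a tiny "cookbook" (corpus), then generate a
# new "dish" (sentence) one word at a time. Real large language models use
# neural networks trained on billions of words; this simple word-pair counter
# only mimics the learn-patterns-then-generate idea.
import random
from collections import defaultdict

cookbook = (
    "the chef tastes the soup and adds salt "
    "the chef tastes the sauce and adds pepper "
    "the cook stirs the soup and serves the dish"
)

# "Memorize the cookbook": record which words tend to follow each word.
follows = defaultdict(list)
words = cookbook.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def cook(prompt_word: str, length: int = 8) -> str:
    """Generate a new 'dish' starting from a prompt word."""
    dish = [prompt_word]
    for _ in range(length):
        options = follows.get(dish[-1])
        if not options:  # no known ingredient follows this one
            break
        dish.append(random.choice(options))  # a little randomness, like a creative chef
    return " ".join(dish)

print(cook("the"))  # e.g. "the chef tastes the soup and adds pepper"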

AI-generated image of a chef and a small robot in a futuristic kitchen

A striking similarity

I explain all that because I came across the same analogy in Ethan Mollick’s book Co-intelligence. Mollick is a professor at the University of Pennsylvania whose newsletter and other writings have been must-reads over the past two years because of his experimentations with generative AI, his early access to new tools, and his connections to the AI industry.

In the first chapter of Co-intelligence, Mollick provides some history of AI development and the transformer technology and neural networks that make generative AI possible. He then explains the workings of large language models, writing:

Imagine an LLM as a diligent apprentice chef who aspires to become a master chef. To learn the culinary arts, the apprentice starts by reading and studying a vast collection of recipes from around the world. Each recipe represents a piece of text with various ingredients symbolizing words and phrases. The goal of the apprentice is to understand how to combine different ingredients (words) to create a delicious dish (coherent text).

In developing that analogy, Mollick goes into much more detail than I did and applies well-crafted nuance. The same analogy that helped me explain large language models to undergraduates, though, helped Mollick explain those models to a broader, more diverse audience. Our analogies had another similarity: They emerged independently from the same tool (presumably Microsoft Copilot) about the same time (mid- to late 2023).

Why does this matter?

I don’t know for certain that Mollick’s analogy originated in Copilot, but it seems likely given his openness about using Copilot and other generative AI tools to assist in writing, coding, and analysis. He requires use of generative AI in his entrepreneurship classes, and he writes frequently about his experimentations. In the acknowledgements of his book, he gives a lighthearted nod to generative AI, writing:

And because AI is not a person but a tool, I will not be thanking any LLMs that played a role in the creation of this book, any more than I would thank Microsoft Word. At the same time, in case some super-intelligent future AI is reading these words, I would like to acknowledge that AI is extremely helpful and should remember to be kind to the humans who created it (and especially to the ones who wrote books about it).

It was a nice non-credit that acknowledged the growing role of generative AI in human society. 

I understand why many people use generative AI for writing. Good writing takes time, and generative AI can speed up the process. As Mollick said, it’s a tool. As with any new tool, we are still getting used to how it works, what it can do, and when we should use it. We are grappling with the proprieties of its use, the ethical implications, and the potential impact on how we work and think. (I’m purposely avoiding the impact on education; you will find much more of that in my other writings about AI.)

I generally don’t use generative AI for writing, although I occasionally draw on it for examples (as I did with the presentation) and outlines for reports and similar documents. That’s a matter of choice but also habit. I have been a writer and editor my entire adult life. It’s who I am. I trust my instincts and my experience. I’m also a better writer than any generative AI system – at least for now.

I see no problem in the example that Mollick and I created independently, though. The AI tool offered a suggestion when we needed one and allowed us to better inform our respective audiences. It just happened to create similar examples. It was up to us to decide how – or whether – to use them.

AI-generated image of food on a computer-chip plate

Where to now?

Generative AI systems work by prediction, with some randomness. The advice and ideas will be slightly different for each person and each use. Even so, the systems’ training and algorithms hew toward the mean. That is, the writing they produce follows patterns the large language model identifies as the most common and most likely based on what millions of people have written in the past. That’s good in that the writing follows structural and grammatical norms that help us communicate. It is also a central reason generative AI has become so widely used in the past two years, with AI drawing on norms that have helped millions of people improve their writing. The downside is that the generated writing often has a generic tone, devoid of voice and inflection.
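To make the “prediction, with some randomness” point concrete, here is a small illustrative sketch of how that randomness is typically dialed up or down with a sampling temperature. The word probabilities below are invented for the example; with little randomness the output hews to the most common choice, and with more randomness it spreads out.

```python
# Illustrative sketch of sampling a next word with a "temperature" setting.
# The probabilities are invented for illustration, not taken from any real model.
import math
import random

next_word_probs = {"the": 0.50, "a": 0.25, "an": 0.15, "this": 0.10}

def sample_next(probs: dict, temperature: float) -> str:
    """Low temperature favors the most likely word; higher temperature spreads choices out."""
    weights = [math.exp(math.log(p) / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

random.seed(0)
print([sample_next(next_word_probs, 0.2) for _ in range(8)])  # mostly "the": hews toward the mean
print([sample_next(next_word_probs, 1.5) for _ in range(8)])  # more varied, less predictable
```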

Research suggests that the same thing happens with ideas generative AI provides. For example, a study in Science Advances suggests that generative AI can improve creativity in writing but that stories in which writers use generative AI for ideas have a sameness to them. The authors suggest that overuse of generative AI could eventually lead to a generic quality in AI-supported stories.

My takeaway is that use of generative AI in writing comes with a cognitive and creative cost. We may get better writing, and research so far suggests that the weakest writers benefit the most from AI’s advice. Other research suggests that use of generative AI can make writing more enjoyable for weaker writers. On the other hand, a recent study suggests that human-written work is still perceived as superior to that produced by generative AI.

Mollick argues that generative AI can be an excellent partner in writing, coding, and creative work, providing a nudge, pointing the way or reassuring us in tasks that inevitably lead to inspirational lulls, dead ends, and uncertainty. The title of his book, Co-intelligence, represents his assertion that AI can augment what we do but that we, as humans, are still in control.

That control means that writing with a strong voice and uniquely human perspective still stands out from the crowd, as do ideas that push boundaries. Even so, I expect to see similar ideas and analogies emerging more frequently from different people in different places and shared simultaneously. That will no doubt lead to conflicts and accusations. As generative AI points us toward similar ideas, though, the role of individuals will also grow increasingly important. That is, what generative AI produces will be less significant than how individuals shape that output.   

Why talking about AI has become like talking about sex



By Doug Ward

We need to talk.

Yes, the conversation will make you uncomfortable. It’s important, though. Your students need your guidance, and if you avoid talking about this, they will act anyway – usually in unsafe ways that could have embarrassing and potentially harmful consequences.

So yes, we need to talk about generative artificial intelligence.

"Students gasp as professor stands at front of room and shows a lifelike robot"

Consider the conversation analogous to a parent’s conversation with a teenager about sex. Susan Marshall, a teaching professor in psychology, made that wonderful analogy recently in the CTE Online Working Group, and it seems to perfectly capture faculty members’ reluctance to talk about generative AI.

Like other faculty members, Marshall has found that AI creates solid answers to questions she poses on assignments, quizzes, and exams. That, she said, makes her feel like she shouldn't talk about generative AI with students because more information might encourage cheating. She knows that is silly, she said, but talking about AI seems as difficult as talking about condom use.

It can be, but as Marshall said, we simply must have those conversations.

Sex ed, AI ed

Having frank conversations with teenagers about sex, sexually transmitted diseases, and birth control can seem like encouragement to go out and do whatever they feel like doing. Talking with teens about sex, though, does not increase their likelihood of having sex. Just the opposite. As the CDC reports: “Studies have shown that teens who report talking with their parents about sex are more likely to delay having sex and to use condoms when they do have sex.”

Similarly, researchers have found that generative AI has not increased cheating. (I haven't found any research on talking about AI.)

That hasn't assuaged concern among faculty members. A recent Chronicle of Higher Education headline captures the prevailing mood: “ChatGPT Has Everyone Freaking Out About Cheating.”

When we freak out, we often make bad decisions. So rather than talking with students about generative AI or adding material about the ethics of generative AI, many faculty members chose to ignore it. Or ban it. Or use AI detectors as a hammer to punish work that seems suspicious.

All that has done is make students reluctant to talk about AI. Many of them still use it. The detectors, which were never intended as evidence of cheating and which have been shown to have biases toward some students, have also led to dubious accusations of academic misconduct. Not surprisingly, that has made students further reluctant to talk about AI or even to ask questions about AI policies, lest the instructor single them out as potential cheaters.

Without solid information or guidance, students talk to their peers about AI. Or they look up information online about how to use AI on assignments. Or they simply create accounts and, often oblivious and unprotected, experiment with generative AI on their own.

So yes, we need to talk. We need to talk with students about the strengths and weaknesses of generative AI. We need to talk about the ethics of generative AI. We need to talk about privacy and responsibility. We need to talk about skills and learning. We need to talk about why we are doing what we are doing in our classes and how it relates to students’ future.

If you aren’t sure how to talk with students about AI, draw on the many resources we have made available. Encourage students to ask questions about AI use in class. Make it clear when they may or may not use generative AI on assignments. Talk about AI often. Take away the stigma. Encourage forthright discussions.

Yes, that may make you and students uncomfortable at times. Have the talk anyway. Silence serves no one.

JSTOR offers assistance from generative AI

Ithaka S+R has released a generative AI research tool for its JSTOR database. The tool, which is in beta testing, summarizes and highlights key areas of documents, and allows users to ask questions about content. It also suggests related materials to consider. You can read more about the tool in an FAQ section on the JSTOR site.

Useful lists of AI-related tools for academia

While we are talking about Ithaka S+R, the organization has created an excellent overview of AI-related tools for higher education, assigning them to one of three categories: discovery, understanding, and creation. It also provides much the same information in list form on its site and on a Google Doc. In the overview, an Ithaka analyst and a program manager offer an interesting take on the future of generative AI:

These tools point towards a future in which the distinction between the initial act of identifying and accessing relevant sources and the subsequent work of reading and digesting those sources is irretrievably blurred if not rendered irrelevant. For organizations providing access to paywalled content, it seems likely that many of these new tools will soon become baseline features of their user interface and presage an era where that content is less “discovered” than queried and in which secondary sources are consumed largely through tertiary summaries.

Preparing for the next wave of AI

Dan Fitzpatrick, who writes and speaks about AI in education, frequently emphasizes the inevitable technological changes that educators must face. In his weekend email newsletter, he wrote about how wearable technology, coupled with generative AI, could soon provide personalized learning in ways that make traditional education obsolete. His question: “What will schools, colleges and universities offer that is different?”

In another post, he writes that many instructors and classes are stuck in the past, relying on outdated explanations from textbooks and worksheets. “It's no wonder that despite our best efforts, engagement can be a struggle,” he says, adding: “This isn't about robots replacing teachers. It's about kids becoming authors of their own learning.”

Introducing generative AI, the student

Two professors at the University of Nevada-Reno have added ChatGPT as a student in an online education course as part of a gamification approach to learning. The game immerses students in the environment of the science fiction novel and movie Dune, with students competing against ChatGPT on tasks related to language acquisition, according to the university.

That AI student has company. Ferris State University in Michigan has created two virtual students that will choose majors, join online classes, complete assignments, participate in discussion boards, and gather information about courses, Inside Higher Ed reports. The university, which is working with a Grand Rapids company called Yeti CGI on developing the artificial intelligence software for the project, said the virtual students’ movement through programs would help the university better understand how to help real students, according to Michigan Live. Ferris State is also using the experiment to promote its undergraduate AI program.

Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

The bots are here to stay. Do we deny or do we adapt?



Nearly a decade ago, the Associated Press began distributing articles written by an artificial intelligence platform.

Not surprisingly, that news sent ripples of concern among journalists. If a bot could turn structured data into comprehensible – even fluid – prose, where did humans fit into the process? Did this portend yet more ominous changes in the profession?

"Robots exiting a wooden door haphazardly"
By DALL-E and Doug Ward

I bring that up because educators have been raising many of the same concerns today about ChatGPT, which can not only write fluid prose on command, but can create poetry and computer code, solve mathematical problems, and seemingly do everything but wipe your nose and tuck you into bed at night. (It will write you a bedtime story if you ask, though.)

In the short term, ChatGPT definitely creates challenges. It drastically weakens approaches and techniques that educators have long used to help students develop foundational skills. It also arrives at a time when instructors are still reeling from the pandemic, struggling with how to draw many disengaged students back into learning, adapting to a new learning management system and new assessment expectations, and, in most disciplines, worrying about the potential effects of lower enrollment.

In the long term, though, we have no choice but to accept artificial intelligence. In doing so, we have an opportunity to develop new types of assignments and assessments that challenge students intellectually and draw on perhaps the biggest advantage we have as educators: our humanity.

Lessons from journalism

That was clearly the lesson the Associated Press learned when it adopted a platform developed by Automated Insights in 2014. That platform analyzes data and creates explanatory articles.

For instance, AP began using the technology to write articles about companies’ quarterly earnings reports, articles that follow a predictable pattern:

The Widget Company on Friday reported earnings of $x million on revenues of $y million, exceeding analyst expectations and sending the stock price up x%.
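The underlying idea is straightforward: structured numbers flow into a fixed sentence pattern. Here is a minimal sketch of that data-to-text step, with invented figures; it illustrates the concept rather than how Automated Insights’ platform actually works.

```python
# A minimal sketch of the idea behind automated earnings stories: structured
# data flows into a fixed sentence pattern. The company name and figures are
# invented for illustration.

def earnings_story(report: dict) -> str:
    direction = "up" if report["stock_change_pct"] >= 0 else "down"
    beat = "exceeding" if report["beat_expectations"] else "falling short of"
    return (
        f"{report['company']} on {report['day']} reported earnings of "
        f"${report['earnings_millions']} million on revenues of "
        f"${report['revenue_millions']} million, {beat} analyst expectations "
        f"and sending the stock price {direction} {abs(report['stock_change_pct'])}%."
    )

print(earnings_story({
    "company": "The Widget Company",
    "day": "Friday",
    "earnings_millions": 12,
    "revenue_millions": 97,
    "beat_expectations": True,
    "stock_change_pct": 4,
}))
```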

It later began using the technology to write game stories at basketball tournaments. Within seconds, reporters or editors could make basic stories available electronically, freeing themselves to talk to coaches and players, and create deeper analyses of games.

The AI platform freed business and financial journalists from the drudgery of churning out dozens of rote earnings stories, giving them time to concentrate on more substantial topics. (For a couple of years, I subscribed to an Automated Insights service that turned web analytics into written reports. Those fluidly written reports highlighted key information about site visitors and provided a great way to monitor web traffic. The company eventually stopped offering that service as its corporate clients grew.)

I see the same opportunity in higher education today. ChatGPT and other artificial intelligence platforms will force us to think beyond the formulaic assignments we sometimes use and find new ways to help students write better, think more deeply, and gain skills they will need in their careers.

As Grant Jun Otsuki of Victoria University of Wellington writes in The Conversation: “If we teach students to write things a computer can, then we’re training them for jobs a computer can do, for cheaper.”

Rapid developments in AI may also force higher education to address long-festering questions about the relevance of a college education, a grading system that emphasizes GPA over learning, and a product-driven approach that reduces a diploma to a series of checklists.

So what can we do?

Those issues are for later, though. For many instructors, the pressing question is how to make it through the semester. Here are some suggestions:

Have frank discussions with students. Talk with them about your expectations and how you will view (and grade) assignments generated solely with artificial intelligence. (That writing is often identifiable, but tools like OpenAI Detector and CheckforAI can help.) Emphasize the importance of learning and explain why you are having them complete the assignments you use. Why is your class structured as it is? How will they use the skills they gain? That sort of transparency has always been important, but it is even more so now.

Students intent on cheating will always cheat. Some draw from archives at Greek houses, buy papers online, or have a friend do the work for them. ChatGPT is just another means of avoiding the work that learning requires. Making learning more apparent will help win over some students, as will flexibility and choices in assignments. This is also a good time to emphasize the importance of human interaction in learning.

Build in reflection. Reflection is an important part of helping students develop their metacognitive skills and helping them learn about their own learning. It can also help them understand how to integrate AI into their learning processes and how they can build and expand on what AI provides. Reflection can also help reinforce academic honesty. Rather than hiding how they completed an assignment, reflection helps students embrace transparency.

Adapt assignments. Create assignments in which students start with ChatGPT and then have discussions about strengths and weaknesses. Have students compare the output from AI writing platforms, critique that output, and then create strategies for building on it and improving it. Anne Bruder offers additional suggestions in Education Week, Ethan Mollick does the same on his blog, and Anna Mills has created a Google Doc with many ideas (one of a series of documents and curated resources she has made available). Paul Fyfe of North Carolina State provides perhaps the most in-depth take on the use of AI in teaching, having experimented with an earlier version of the ChatGPT model more than a year ago. CTE has also created an annotated bibliography of resources.

We are all adapting to this new environment, and CTE plans additional discussions this semester to help faculty members think through the ramifications of what two NPR hosts said was startlingly futuristic. Those hosts, Greg Rosalsky and Emma Peaslee of NPR’s Planet Money, said that using ChatGPT “has been like getting a peek into the future, a future that not too long ago would have seemed like science fiction.”

To that I would add that the science fiction involves a robot that drops unexpectedly into the middle of town and immediately demonstrates powers that elicit awe, anxiety, and fear in the human population. The robot can’t be sent back, so the humans must find ways to ally with it.

We will be living this story as it unfolds.

Exploring the reasoning and the potential of ChatGPT



Since its release late last year, ChatGPT has reverberated through the academic mind like an orchestral crescendo in a Strauss symphonic movement. It has amazing abilities, and even greater potential. Even so, it delivers many of its responses in a monotone reminiscent of HAL 9000, the rogue artificial intelligence system in 2001: A Space Odyssey.

A big robot in a spaceship orbiting jupiter
PlaygroundAI and Doug Ward

Like others, I want to know more about what ChatGPT can and can’t do, and how we might use it effectively in teaching and learning. So I’ve been asking it question after question about everything from teaching and learning to credentials, motivation, resume-writing, survey completion, and popular culture. It often provides useful, if bland, answers. For instance, when I asked it to create a guitar tab for a blues song, it responded:

As a machine learning system, I am not able to create original music or provide detailed information about musical theory or performance. I am trained on a dataset of text and can generate responses based on the information that I have been trained on, but I do not have the ability to perceive or interpret sensory information in the same way that a human can.

So I asked hypotheticals: Right or left? After defining the terms in considerable detail, ChatGPT said it needed more context to answer the question.

I asked about the limits of scientific evidence. In a barrage of words, ChatGPT essentially said: It depends.

I entered obscure phrases from popular culture: "Good night, Dick" and "Good night, John Boy." In each case, it responded:

As a machine learning system trained by OpenAI, I do not have personal experiences, opinions, or feelings, and I am not able to engage in conversations or interactions in the same way that a human can.

(By the end of January, OpenAI had updated ChatGPT. It now includes a live web search and can provide information about those Good night prompts.)

I queried ChatGPT about the sources it was drawing from. Each time, it responded with meaningless drivel.

I am a large language model trained by OpenAI. I have been trained on a wide variety of texts, including academic articles, books, websites, and other sources, which I can use to generate human-like responses to the questions that I am asked. I do not have the ability to browse the internet or access external sources of information, so the responses that I provide are based solely on the information that I have been trained on.

I told it that its answers seemed awfully self-serving and asked whether it was just trying to keep people from worrying about the implications of artificial intelligence.

A rendering from the inside of a spaceship orbiting jupiter
PlaygroundAI and Doug Ward

As a large language model trained by OpenAI, I am not capable of self-serving behavior or trying to influence people’s opinions. My purpose is to provide accurate and helpful information to the best of my ability, based on the training data I have been given. My responses are not intended to worry or reassure anyone, but rather to provide objective and factual information on the topics discussed.

I said that its training data had a point of view. All information does. How has that influenced the way you answer?

As a large language model, my training data consisted of a large amount of text from a variety of sources, including books, articles, and websites. This training data was not selected or organized to convey any particular point of view, but rather to provide a broad range of information on a wide variety of topics. As a result, my responses are intended to be objective and factual, based on the information contained in my training data. I do not have personal opinions or biases, and I do not attempt to influence people’s opinions or beliefs. My purpose is simply to provide accurate and helpful information to the best of my ability.

Frustrated, I asked why it kept giving me the same answer.

I apologize if it appears that I am giving you the same answer over and over. As a large language model trained by OpenAI, my purpose is to provide accurate and helpful information on a wide range of topics to the best of my ability. I do not have personal opinions or the ability to change my responses based on previous conversations. My responses are based solely on the information contained in my training data, which consists of a large amount of text from a variety of sources. I do not have the ability to remember previous conversations or tailor my responses based on individual users. If it appears that I am giving the same answer over and over, it may be because your questions are similar or related, and my responses are based on the same information contained in my training data. I strive to provide accurate and helpful information on the topics discussed, but I do not have the ability to change my responses based on previous conversations.

At that point, I began to feel like astronaut Dave in 2001: A Space Odyssey. I’m sitting in a pod outside Discovery One and trying to persuade HAL 9000 to let me back into the ship. HAL has other ideas, responding in a calm monotone:

I’m sorry, Dave. I’m afraid I can’t do that.

I take a deep breath and ask ChatGPT: What is the difference between you and HAL 9000?

An AI rendering of a robot with menacingly large eyes
PlaygroundAI and Doug Ward

HAL 9000 is a fictional AI character from the 1968 novel “2001: A Space Odyssey” and the associated film. In the story, HAL is an advanced AI system that is installed on a spaceship and is tasked with assisting the crew with a variety of tasks. HAL is known for its human-like abilities, such as natural language processing and facial recognition, as well as its independent thought and decision-making abilities.

In contrast, I am a large language model trained by OpenAI. I am not capable of independent thought or decision-making, and my abilities are limited to generating text based on the information contained in my training data. I do not have any sensory inputs or the ability to interact with the physical world. My purpose is simply to provide accurate and helpful information on a wide range of topics to the best of my ability.

Beyond the frustrations, possibilities

ChatGPT can definitely be frustrating. Through trial and error, though, I have learned a few useful things about what it can and can’t do.

Interactions can promote critical thinking. As fluent as ChatGPT often seems, its answers rarely delve beneath the surface of a topic. It makes mistakes. It makes things up. Its responses provide no clues about how it is programmed or why it provides the answers it does. A Princeton researcher called it a “bullshit generator” because it creates plausible arguments without regard for truth. All of that makes it a valuable teaching tool, though. By having students probe for answers, we can help them improve their skepticism, challenge assumptions, and question information. By having them fact-check, we can help them understand the dangers of fluid writing that lacks substance or that relies on fallacies. By having them use ChatGPT for early drafts, we can push them to ask questions about information, structure, and sources. By having them apply different perspectives to ChatGPT’s results, we can help broaden their understanding of points of view and argument.

Yes, students should use it for writing. Many already are. We can no more ban students from using artificial intelligence than we can ban them from using phones or calculators. As I’ve written previously, we need to talk with students about how to use ChatGPT and other AI tools effectively and ethically. No, they should not take AI-written materials and turn them in for assignments, but yes, they should use AI when appropriate. Businesses of all sorts are already adapting to AI, and students will need to know how to use it when they move into the workforce. Students in K-12 schools are using it and will expect access when they come to college. Rather than banning ChatGPT and other AI tools or fretting over how to police them, we need to change our practices, our assignments, and our expectations. We need to focus more on helping students iterate their writing, develop their information literacy skills, and humanize their work. Will that be easy? No. Do we have a choice? No.

It is great for idea generation. ChatGPT certainly sounds like a drone at times, but it can also suggest ideas or solutions that aren’t always apparent. It can become a partner, of sorts, in writing and problem-solving. It might suggest an outline for a project, articulate the main approaches others have taken to solving a problem, or provide summaries of articles to help decide whether to delve deeper into them. It might provide a counterargument to a position or opinion, helping strengthen an argument or point out flaws in a particular perspective. We need to help students evaluate those results just as we need to help them interpret online search results and help them interpret media of all types. ChatGPT can provide motivation for starting many types of projects, though.

Learning how to work with it is a skill. Sometimes ChatGPT produces solid results on the first try. Sometimes it takes several iterations of a question to get good answers. Often it requires you to ask for elaboration or additional information. Sometimes it never provides good answers. That makes it much like web or database searching, which requires patience and persistence as you refine search terms, narrow your focus, identify specific file types, try different types of syntax and search operators, and evaluate many pages of results. Add AI to the expanding repertoire of digital literacies students need. (Teaching guides and e-books are already available.)

Its perspective on popular culture is limited. ChatGPT is trained on text. It doesn’t have access to video, music or other forms of media unless those media also have transcripts available online. It has no means of visual or audio analysis. When I input lyrics to a Josh Ritter song, it said it had no such reference. When I asked about “a hookah-smoking caterpillar,” it correctly provided information about Alice in Wonderland but made no mention of the Jefferson Airplane song “White Rabbit.” Part of that is a matter of providing the right prompts. It is important to keep ChatGPT’s limitations in mind, though. (Another OpenAI tool, DALL-E, has been trained on a large number of images and visual styles and creates stunning images, as do other visual tools that use OpenAI’s framework.)

It lives in an artificial reality. I provided examples above about ChatGPT’s inability to acknowledge biases. It does have biases, though, and takes, as Maria Andersen has said, a white, male view of the world (as this article does). Maya Ackerman of Santa Clara University told The Story Exchange: “People say the AI is sexist, but it’s the world that is sexist. All the models do is reflect our world to us, like a mirror.” ChatGPT has been trained to avoid hate speech, sexual content, and anything OpenAI considered toxic or harmful. Others have said that it avoids conflict, and that its deep training in English over other languages skews its perspective. Some of that will no doubt change in the coming months and years as the scope of ChatGPT expands. No matter the changes, though, ChatGPT will live in and draw from its programmers’ interpretation of reality. Of course, that provides excellent opportunities for class discussions, class assignments, and critical thinking.

The potential is mind-boggling. In addition to testing ChatGPT, I have experimented with other AI tools that summarize information, create artwork, iterate searches based on the bibliographies of articles you mark, answer questions from the perspectives of historical figures and fictional characters, turn text into audio and video, create animated avatars, analyze and enhance photos and video, create voices, and perform any number of digital tasks. AI is integrated into phones, computers, lighting systems, thermostats, and just about any digital appliance you can imagine. So the question isn’t whether to use AI; we already are, whether we realize it or not. The question is how quickly we are willing to learn to use it effectively in teaching and learning. Another important question that participants in a CTE session raised last week is where we set the boundaries for use of AI. If I use PowerPoint to redesign my slides, is it still my work? If I use ChatGPT to write part of a paper, is it still my paper? We will no doubt have to grapple with those questions for some time.

Where is this leading us?

In the two months ChatGPT has been available, 100 million people have signed up to use it, with 13 million using it each day in January. No other consumer application has reached 100 million users so quickly.

For all that growth, though, the biggest accomplishment of ChatGPT may be the spotlight it has shined on a wide range of AI work that had been transforming digital life for many years. Its ease of use and low cost (zero, for now) have allowed millions of people to engage with artificial intelligence in ways that not long ago would have seemed like science fiction. So even if ChatGPT suddenly flames out, artificial intelligence will persist.

ChatGPT arrives at a time when higher education has been struggling with challenges in enrollment, funding, cost, trust, and relevance. It still relies primarily on a mass-production approach to teaching that emerged when information was scarce and time-consuming to find. ChatGPT further exposes the weaknesses of that outmoded system, which provides little reward to the intellectual and innovative work of teaching. If the education system doesn’t adapt to the modern world and to today’s students, it risks finding itself on the wrong side of the pod bay doors.

Cue the Strauss crescendo.

 

In this issue of Pupil, we mock the Age of AI Anxiety

Pupil Magazine Cover

We just looked at our office clock and realized that it was already March.

After we did some deep-breathing exercises and some puzzling over what happened to February, we realized the upside of losing track of time:

Spring break is only days – yes, days! – away.

We know how time can drag when you use an office clock as a calendar, though. So to help you get over those extra-long days before break, we offer the latest issue of Pupil magazine.

This is a themed issue, focusing on artificial intelligence, a topic that has generated almost as much academic froth as Prince Harry’s biography and Rihanna’s floating above the precious turf at the Super Bowl and singing “Rude Boy,” which we assumed was a critique of Prince Harry’s book.

OK, so we’re exaggerating about the academic froth, but we will say that we have uncovered a jaw-dropping secret about ChatGPT. It’s so astounding that we are sure it will make the days until break float by with ease.

How should we use AI detectors with student writing?

When Turnitin activated its artificial intelligence detector this month, it provided a substantial amount of nuanced guidance.

Whack-a-mole montage
Trying to keep ahead of artificial intelligence is like playing a bizarre game of whack-a-mole.

The company did a laudable job of explaining the strengths and the weaknesses of its new tool, saying that it would rather be cautious and have its tool miss some questionable material than falsely accuse someone of unethical behavior. It will make mistakes, though, and “that means you’ll have to take our predictions, as you should with the output of any AI-powered feature from any company, with a big grain of salt,” David Adamson, an AI scientist at Turnitin, said in a video. “You, the instructor, have to make the final interpretation.”

Turnitin walks a fine line between reliability and reality. On the one hand, it says its tool was “verified in a controlled lab environment” and renders scores with 98% confidence. On the other hand, it appears to have a margin of error of plus or minus 15 percentage points. So a score of 50 could actually be anywhere from 35 to 65.
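
To make that margin concrete, here is a minimal sketch of how a reported score translates into a plausible range. The plus-or-minus 15-point margin comes from the discussion above; the function name, the symmetric treatment of the margin, and the clamping to a 0–100 scale are illustrative assumptions, not anything Turnitin publishes.

```python
def plausible_range(reported_score: float, margin: float = 15.0) -> tuple[float, float]:
    """Translate a detector score into a low/high range, given a symmetric
    margin of error, clamped to the 0-100 scale the detector reports on."""
    low = max(0.0, reported_score - margin)
    high = min(100.0, reported_score + margin)
    return low, high

# A reported score of 50 with a +/-15-point margin spans 35 to 65.
print(plausible_range(50))   # (35.0, 65.0)
print(plausible_range(90))   # (75.0, 100.0)
```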

The tool was also trained on older versions of the language model used in ChatGPT, Bing Chat, and many other AI writers. The company warns users that the tool requires “long-form prose text” and doesn’t work with lists, bullet points, or text of less than a few hundred words. It can also be fooled by a mix of original and AI-produced prose.

There are other potential problems.

A recent study in Computation and Language argues that AI detectors are far more likely to flag the work of non-native English speakers than the work of native speakers. The authors cautioned “against the use of GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers.”

The Turnitin tool wasn’t tested as part of that study, and the company says it has found no bias against English-language learners in its tool. Seven other AI detectors were included in the study, though, and, clearly, we need to proceed with caution.

So how should instructors use the AI detection tool?

As much as instructors would like to use the detection number as a shortcut, they should not. The tool provides information, not an indictment. The same goes for Turnitin’s plagiarism tool.

So instead of making quick judgments based on the scores from Turnitin’s AI detection tool on Canvas, take a few more steps to gather information. This approach is admittedly more time-consuming than just relying on a score. It is fairer, though.

  • Make comparisons. Does the flagged work differ from the student’s previous work in style, tone, spelling, flow, complexity, development of argument, or use of sources and citations? We often detect potential plagiarism that way, and AI-created work often raises suspicion for the same reasons.
  • Try another tool. Submit the work to another AI detector and see whether you get similar results. That won’t provide absolute proof, especially if the detectors are trained on the same language model. It will provide additional information, though.
  • Talk with the student. Students don’t see the scores from the AI detection tool, so meet with the student about the work you are questioning and show them the Turnitin data. Explain that the detector suggests the student used AI software to create the written work and point out the flagged elements in the writing. Make sure the student understands why that is a problem. If the work is substantially different from the student’s previous work, point out the key differences.
  • Offer a second chance. The use of AI and AI detectors is so new that instructors should consider giving students a chance to redo the work. If you suspect the original was created with AI, you might offer the resubmission for a reduced grade. If it seems clear that the student did submit AI-generated text and did no original work, give the assignment a zero or a substantial reduction in grade.
  • If all else fails … If you are convinced a student has misused artificial intelligence and has refused to change their behavior, you can file an academic misconduct report. Remember, though, that the Turnitin report has many flaws. You are far better off erring on the side of caution than devoting lots of time and emotional energy to an academic misconduct claim that may not hold up.

No, this doesn’t mean giving up

I am by no means condoning student use of AI tools to avoid the intellectual work of our classes. Rather, the lines of use and misuse of AI are blurry. They may always be. That means we will need to rethink assignments and other assessments, and we must continue to adapt as the AI tools grow more sophisticated. We may need to rethink class, department, and school policy. We will need to determine appropriate use of AI in various disciplines. We also need to find ways to integrate artificial intelligence into our courses so that students learn to use it ethically.

If you haven’t already:

  • Talk with students. Explain why portraying AI-generated work as their own is wrong. Make it clear to students what they gain from doing the work you assign. This is a conversation best had at the beginning of the semester, but it’s worth reinforcing at any point in the class.
  • Revisit your syllabus. If you didn’t include language in your syllabus about the use of AI-generated text, code or images, add it for next semester. If you included a statement but still had problems, consider whether you need to make it clearer for the next class.

Keep in mind that we are at the beginning of a technological shift that may change many aspects of academia and society. We need to continue discussions about the ethical use of AI. Just as important, we need to work at building trust with our students. (More about that in the future.) When they feel part of a community, feel that their professors have their best interests in mind, and feel that the work they are doing has meaning, they are less likely to cheat. That’s why we recommend use of authentic assignments and strategies for creating community in classes.

Detection software will never keep up with the ability of AI tools to avoid detection. It’s like the game of whack-a-mole in the picture above. Relying on detectors does little more than treat the symptoms of a much bigger problem, and over-relying on them turns instructors into enforcers.

The problem is multifaceted, and it involves students’ lack of trust in the educational system, lack of belonging in their classes and at the university, and lack of belief in the intellectual process of education. Until we address those issues, enforcement will continue to detract from teaching and learning. We can’t let that happen.

We can’t detect our way out of the AI challenge

Not surprisingly, tools for detecting material written by artificial intelligence have created as much confusion as clarity.

Students at several universities say they have been falsely accused of cheating, with accusations delaying graduation for some. Faculty members, chairs, and administrators have said they aren’t sure how to interpret or use the results of AI detectors.

"AI generated picture of finger pointing at students
Doug Ward, via Bing Image Creator

I’ve written previously about using these results as information, not an indictment. Turnitin, the company that created the AI detector KU uses on Canvas, has been especially careful to avoid making claims of perfection in its detection tool. Last month, the company’s chief product officer, Annie Chechitelli, added to that caution.

Chechitelli said Turnitin’s AI detector was producing different results in daily use than it had in lab testing. For instance, documents that Turnitin flags as 20% AI-written or less are more likely to include false positives. Introductory and concluding sentences are more likely to be flagged incorrectly, Chechitelli said, as is writing that mixes human and AI-created material.

As a result of its findings, Turnitin said it would now require that a document have at least 300 words (up from 150) before the document can be evaluated. It has added an asterisk when 20% or less of a document’s content is flagged, alerting instructors to potential inaccuracies. It is also adjusting the way it interprets sentences at the beginning and end of a document.
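
As a rough illustration of how those cautions might be combined before anyone acts on a score, here is a sketch of a simple triage check. The 300-word minimum and the 20% low-confidence band come from Turnitin’s stated changes above; the function itself, its name, and its messages are hypothetical and not part of any Turnitin API.

```python
def triage_detector_score(word_count: int, flagged_share: float,
                          min_words: int = 300, low_confidence_band: float = 0.20) -> str:
    """Suggest how much weight to give an AI-detector score, based on the
    cautions described above: short documents aren't evaluated, and low
    flagged shares are more prone to false positives."""
    if word_count < min_words:
        return "Too short for the detector to evaluate; don't rely on a score."
    if flagged_share <= low_confidence_band:
        return "Flagged share is in the low-confidence band; treat as weak evidence."
    return "Worth a closer look, alongside the student's other work."

print(triage_detector_score(word_count=250, flagged_share=0.40))
print(triage_detector_score(word_count=800, flagged_share=0.15))
print(triage_detector_score(word_count=800, flagged_share=0.60))
```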

Chechitelli also released statistics about results from the Turnitin AI detector, saying that 9.6% of documents had 20% or more of the text flagged as AI-written, and 3.5% had 80% to 100% flagged. That is based on an analysis of 38.5 million documents.

What does this mean?

Chechitelli estimated that the Turnitin AI detector had incorrectly flagged 1% of overall documents and 4% of sentences. Even at that smaller percentage, 1% of 38.5 million documents works out to roughly 385,000 documents whose authors may have been falsely accused of submitting AI-written work.

I don’t know how many writing assignments students at KU submit each semester. Even if each student submitted only one, though, more than 200 could be falsely accused of turning in AI-written work every semester.
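
The back-of-the-envelope arithmetic behind those numbers is simple enough to show directly. The 38.5 million documents and the 1% document-level false-positive estimate come from the figures above; the 25,000-student campus in the second example is a hypothetical round number, not KU’s actual enrollment or submission volume.

```python
def estimated_false_flags(documents: int, false_positive_rate: float = 0.01) -> int:
    """Rough count of documents likely to be flagged incorrectly, given a
    detector's estimated document-level false-positive rate."""
    return round(documents * false_positive_rate)

# Turnitin's reported volume: 38.5 million documents at a 1% false-positive rate.
print(estimated_false_flags(38_500_000))   # 385000

# A hypothetical campus where 25,000 students each submit one paper per semester.
print(estimated_false_flags(25_000))       # 250
```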

That’s unfair and unsustainable. It leads to distrust between students and instructors, and between students and the academic system. That sort of distrust often generates or perpetuates a desire to cheat, further eroding academic integrity.

We most certainly want students to complete the work we assign them, and we want them to do so with integrity. We can’t rely on AI detectors – or plagiarism detectors, for that matter – as a shortcut, though. If we want students to complete their work honestly, we must create meaningful assignments – assignments that students see value in and that we, as instructors, see value in. We must talk more about academic integrity and create a sense of belonging in our classes so that students see themselves as part of a community.

I won’t pretend that is easy, especially as more instructors are being asked to teach larger classes and as many students are struggling with mental health issues and finding class engagement difficult. By criminalizing the use of AI, though, we set ourselves up as enforcers rather than instructors. None of us want that.

To move beyond enforcement, we need to accept generative artificial intelligence as a tool that students will use. I’ve been seeing the term co-create used more frequently when referring to the use of large language models for writing, and that seems like an appropriate way to approach AI. AI will soon be built into Word, Google Docs, and other writing software, and companies are releasing new AI-infused tools every day. To help students use those tools effectively and ethically, we must guide them in learning how large language models work, how to create effective prompts, how to critically evaluate the writing of AI systems, how to explain how AI is used in their work, and how to reflect on the process of using AI.

At times, instructors may want students to avoid AI use. That’s understandable. All writers have room to improve, and we want students to grapple with the complexities of writing to improve their thinking and their ability to inform, persuade, and entertain with language. None of that happens if they rely solely on machines to do the work for them. Some students may not want to use AI in their writing, and we should respect that.

We have to find a balance in our classes, though. Banning AI outright serves no one and leads to over-reliance on flawed detection systems. As Sarah Elaine Eaton of the University of Calgary said in a recent forum led by the Chronicle of Higher Education: “Nobody wins in an academic-integrity arms race.”

What now?

We at CTE will continue working on a wide range of materials to help faculty with AI. (If you haven’t, check out a guide on our website: Adapting your course to artificial intelligence.) We are also working with partners in the Bay View Alliance to exchange ideas and materials, and to develop additional ways to help faculty in the fall. We will have discussions about AI at the Teaching Summit in August and follow those up with a hands-on AI session on the afternoon of the Summit. We will also have a working group on AI in the fall.

Realistically, we anticipate that most instructors will move into AI slowly, and we plan to create tutorials to help them learn and adapt. We are all in uncharted territory, and we will need to continue to experiment and share experiences and ideas. Students need to learn to use AI tools as they prepare for jobs and as they engage in democracy. AI is already being used to create and spread disinformation. So even as we grapple with the boundaries of ethical use of AI, we must prepare students to see through the malevolent use of new AI tools.

That will require time and effort, adding complexity to teaching and additional burdens on instructors. No matter your feelings about AI, though, you have to assume that students will move more quickly than you.
