
How a new Copilot tool might be used in teaching



By Doug Ward

The KU version of Copilot now allows the creation of agents, which means you can customize Copilot and give it instructions on what you want it to do, how you want it to respond, and what format its output should follow. 

An agent still uses Copilot’s foundational training, but the instructions can reduce the need for long, complex prompts and speed up tasks you perform regularly. You can also direct the agent to websites you would like it to draw on, and create starter prompts for users.
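
To make that concrete, here is a brief, hypothetical example of the kind of guidance you might give an agent. It is only an illustration of the natural-language format, not an agent KU provides:

    You are a writing coach for first-year journalism students. When a
    student uploads a draft, identify up to five issues with clarity,
    structure, or attribution. For each issue, quote the relevant passage,
    explain the problem in one or two sentences, and suggest one possible
    revision. Keep a supportive tone, and do not rewrite the draft for
    the student.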

Copilot has also gained another function: the ability to store prompts for reuse. That isn’t nearly as useful as creating agents, but both additions give users more control over Copilot and should make it more valuable for many faculty members, staff members, and graduate students. (I don’t know whether the new functions are available to undergraduates, but they probably are.)

These features have been available for some time in paid versions of Copilot. What is new is access to them when you use your KU credentials to log in to Copilot, Microsoft’s main generative artificial intelligence tool.

Potential and limitations 

Agents have the potential to improve the accuracy of Copilot’s responses because the directions you provide limit the scope of Copilot’s actions and tailor the tone and substance of those responses. Accuracy also improves if you give Copilot examples and specific material to work with (an uploaded document, for instance).

If you log in with your KU ID, Copilot also has additional layers of data protection. For instance, material you use in Copilot isn’t used to train large language models, and it is covered by the same privacy protections that KU users have with such tools as Outlook and OneDrive.

For all its potential, though, Copilot has several limitations. Those include:

  • Customization restrictions. A Copilot agent allows you to provide up to 8,000 characters, or about 1,500 words, of guidance. That guidance is essentially an extended prompt created with natural language, but it includes any examples you provide or specific information you want your agent to draw on. The 8,000 characters may seem substantial, but that count dwindles quickly if you provide examples and specific instructions. 
  • Input restrictions. Once you create an agent, Copilot also has an input limit of 8,000 characters. That includes your prompt and whatever material you want Copilot to work with. If you have given your agent substantial instructions, you shouldn’t need much of a prompt, so you should be able to upload a document of about 1,500 words, a spreadsheet with 800 cells, or a PowerPoint file with eight to 16 slides. (Those are just estimates; the short script after this list offers one way to check whether a file will fit.) The limit on code files will vary depending on the language and the volume of documentation and comments. For instance, Python, Java, and HTML will use up the character count more quickly. The upshot is that you can’t use a Copilot agent to analyze long, complex material – at least in the version we have at KU. (The 8,000-character limit is the same whether you use an agent or prompt Copilot directly.)
  • Limited scope. Tools like NotebookLM allow you to analyze dozens of documents at once. I haven’t found a way to do that with a Copilot agent, nor have I found a way to run a serial analysis of materials. For instance, there’s no way to give Copilot several documents and ask it to provide individual feedback on each. You have to load one document at a time, and each document must fall within the limits listed above.
  • Potential fabrication. The guidance you provide to a Copilot agent doesn’t eliminate the risk of fabrication. Anything created by a generative AI model may include fabricated material and fabricated sources, and these models have inherent biases because of the way they are trained. It is crucial to examine all AI output closely. Ultimately, anything you create or do with generative AI is only as good as your critical evaluation of that material.
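
Because of those input limits, it helps to know in advance whether a document will fit. The Python sketch below counts the characters in each text file in a folder and, when a file exceeds the cap, splits it into pieces you could submit one at a time. It is my own rough illustration, not a Copilot feature: the 8,000-character limit comes from the restrictions above, while the prompt allowance and the file pattern are assumptions you can adjust.

    # A rough helper for working within Copilot's 8,000-character input
    # limit. The limit reflects the restrictions described above; the
    # prompt allowance and the *.txt file pattern are my own assumptions.
    from pathlib import Path

    LIMIT = 8000          # Copilot's input cap, in characters
    PROMPT_BUDGET = 500   # assumed allowance for your prompt text

    def chunks(text, size):
        """Yield pieces of text no longer than size characters,
        breaking at paragraph boundaries where possible."""
        current = ""
        for p in text.split("\n\n"):
            if len(current) + len(p) + 2 <= size:
                current = f"{current}\n\n{p}" if current else p
            else:
                if current:
                    yield current
                while len(p) > size:   # a single oversized paragraph
                    yield p[:size]     # gets split bluntly
                    p = p[size:]
                current = p
        if current:
            yield current

    budget = LIMIT - PROMPT_BUDGET
    for path in sorted(Path(".").glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        if len(text) <= budget:
            print(f"{path.name}: {len(text):,} characters -- fits in one upload")
        else:
            pieces = list(chunks(text, budget))
            print(f"{path.name}: {len(text):,} characters -- needs {len(pieces)} pieces")
            for i, piece in enumerate(pieces, start=1):
                out = path.with_suffix(f".part{i}.txt")
                out.write_text(piece, encoding="utf-8")

Running the script in a folder of drafts gives you a quick inventory of what will fit before you start a feedback session.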

An example of what you might do

I have been working with the Kansas Law Enforcement Training Center, a branch of KU near Hutchinson that provides training for officers across the state.

One component of the center’s training involves guiding officers in writing case reports. Those reports provide brief accounts of crimes or interactions an officer has after being dispatched. They are intended to be factual and accurate. At the training center, officers write practice reports, and center staff members provide feedback. This often involves dozens of reports at a time, and the staff wanted to see whether generative AI could help with the process.

Officers have the same challenges as all writers: spelling, punctuation, grammar, consistency, and other structural issues. Those issues provided the basis for a Copilot agent I created. That agent allows the staff to upload a report and, with a short prompt, have Copilot generate feedback. A shareable link allows any of the staff members to use the agent, improving the consistency of feedback. The agent is still experimental, but it has the potential to save the staff many hours they can use for interacting with officers or for other aspects of training. It should also allow them to provide feedback much more quickly.

Importantly, the Copilot agent keeps the staff member in control. It creates a draft that the staff member can edit or expand on before providing feedback to the officer. That is, Copilot provides a starting point, but the staff members must draw on their own expertise to evaluate that output and decide what would be useful to the officer.

Other potential uses

If you aren’t sure whether you could use a Copilot agent in your teaching-related work, consider how you might use a personal assistant who helps with your class. What areas do students struggle with? What do they need help with when you aren’t available? What do they need more practice with? How can you help students brainstorm and refine ideas for projects and papers? What aspects of your class need to be re-envisioned? What tasks might you give an assistant to free up your time?

For instance, a CTE graduate fellow hopes to create an agent to help students learn MLA and APA style. I have written previously about how Copilot can be used as a coach for research projects. Many faculty members at the University of Sydney, in fields such as biology, occupational therapy, biochemistry, education, social work, psychology, nursing, and journalism, have created agents for such tasks as tutoring, skill development, and feedback. Their agents have been used to help students in large classes prepare for exams; to help faculty create case studies and provide feedback on student work; and to help students troubleshoot problems, improve grammar skills, practice interviewing, better understand lecture content, create research proposals, and get answers to general questions about a class when an instructor isn’t available.

Some of the examples at the University of Sydney may be difficult for KU faculty to emulate because Sydney has a custom-built system called Cogniti. That system uses Copilot agents but has more sophisticated tools than KU has. Microsoft has also created many types of agents. As with the examples from Sydney, some are beyond the capabilities of the system we have access to at KU, but they can give you a sense of what is possible.

If you decide to create your own agent, I explain in a separate article how you can do that. My goal is to help instructors explore ways to use generative artificial intelligence proactively rather than feel like they are constantly fighting against its misuse. If nothing else, creating guidance for an agent can help you better articulate steps students can take to improve their learning and identify areas of your class you might want to improve.


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

Tagged artificial intelligence, Copilot, Copilot agents