AI trends that are shaping the future of education
By Doug Ward
A few eye-popping statistics help demonstrate the growing reach of generative AI:
- Use of ChatGPT has quadrupled in the past year, to 700 million weekly users. It has become the fifth-most-visited website.
- ChatGPT accounts for more than 80% of all chatbot sessions each month, followed by Copilot (9.5%), Perplexity (6.1%), and Gemini (2.7%).
- Nearly 90% of college students worldwide say they have used generative AI in their schoolwork.
Beneath the growing use of generative artificial intelligence lie many trends and emerging habits shaping the future of technology, jobs, and education. Social, political, and economic forces were already creating tectonic shifts beneath educational institutions. Generative AI has added to and accelerated the tremors over the past two and a half years, leaving many educators feeling angry and powerless.

Regardless of our views about generative AI, we must adapt. That will mean rethinking pedagogy, assignments, grading, learning outcomes, class structures, majors, and perhaps even disciplines. It will mean finding ways of integrating generative AI into assignments and helping students prepare to use AI in their jobs. That doesn’t mean all AI all the time. It does mean making skill development more transparent, working harder at building trust among students, and articulating the value of learning. It means having frequent conversations with students about what generative AI is and what it can and can’t do. It means helping students understand that getting answers from chatbots is no substitute for the hard work of learning. Importantly, it means ending the demonization of AI use among students and recognizing it as a tool for learning.
I’ll be writing more about that in the coming year. As a prelude, I want to share some of the significant trends I see as we head into year three of ChatGPT and a generative AI world.
Use of generative AI
Younger people are far more likely to use generative AI than older adults are. According to a Pew Research Center survey, 58% of 18- to 29-year-olds have used ChatGPT, compared with 34% of all adults. In late 2024, more than a quarter of 13- to 17-year-olds said they had used generative AI for schoolwork, Pew Research said. As teenagers make AI use a habit, we can expect them to continue that habit in college.
Young people have long been quicker to adopt digital technology than their parents and grandparents (and their teachers). They are less set in their ways, and they gravitate toward technology that allows them to connect and communicate, and to create and interact with media. Once again, they are leading changes in technology use.
AI use among college students is widespread
In a worldwide survey of college students, 86% said they had used AI in their studies, and many students say generative AI has become essential to their learning. In interviews and a focus group conducted by The Chronicle of Higher Education, students said they used AI to brainstorm ideas, find weak areas in their writing, create schedules and study plans, and make up for poor instruction.
Some students said they relied on AI summaries rather than reading papers or books, complaining that reading loads were excessive. Others said they relied on generative AI to tutor them because they couldn’t make it to professors’ office hours, didn’t want to talk with professors, or didn’t think professors could help them. Some students also use ChatGPT to look up questions in class rather than participate in discussion. Some, of course, use generative AI to complete assignments for them.
That use of AI to avoid reading, writing, and discussion is frustrating for faculty members. Those activities are crucial to learning. Many students, though, see themselves as being efficient. We need to do a better job of explaining the value of the work we give students, but we also need to scrutinize our assignments and consider ways of approaching them differently. Integrating AI literacy into courses will also be critical. Students need – and generally want – help in learning how to use generative AI tools effectively. They also need help in learning how to learn, a skill they will need for the rest of their lives.
Most faculty have been skeptical of generative AI
Most instructors lack the time or desire to master use of AI or to make widescale changes to classes to adapt to student use of AI. A Pew poll suggests that women in academia are considerably more skeptical of generative AI than men are, and U.S. and Canadian educators are more skeptical of generative AI than their counterparts in other countries. Research also reinforces what was already apparent: Generative AI can impede learning if students use it to replace their thinking and engagement with coursework.
All of that has created resentment, helplessness, and a hardening of resistance. Some instructors say AI has devalued teaching. Others describe it as frightening or demoralizing. In a New York Times opinion piece, Meghan O’Rourke, a creative writing professor at Yale, writes about the almost seductive pull she felt from ChatGPT as she experimented with generative AI. Ultimately, though, she described large language models as “intellectual Soylent Green,” a reference to the science fiction film in which the planet is dying and the food supply is made of people.
Educators are facing “psychological and emotional” issues as they try to figure out how to handle generative AI in their classes. I have seen this firsthand, although AI is just one of many forces bearing down on faculty. I’ve spoken with faculty members who feel especially demoralized when students turn in lifeless reflections that were obviously AI-generated. "I want to hear what you think," one instructor said she had told her students. Collectively, this has led to what one educator called an existential crisis for academics.
Use of AI in peer review creeps upward
Some publishers have begun allowing generative AI to help speed up peer review and deal with a shortage of reviewers. That, in turn, has led some researchers to add hidden prompts in papers to try to gain more favorable reviews, according to Inside Higher Ed. A study in Science Advances argues that more than 13% of biomedical researchers used generative AI to create abstracts in 2024.
Use among companies continues to grow
By late 2024, 78% of businesses were using some form of AI in their operations, up from 55% in 2023. Many of those companies are shifting to use of local AI systems rather than cloud systems, in large part for security reasons. Relatedly, unemployment rates for new graduates have increased, with some companies saying that AI can do the work of entry-level employees. Hiring has slowed the most in information, finance, insurance, and technical services fields, and many highly paid white-collar jobs may be at risk. The number of internships has also declined. The CEO of Anthropic has warned that AI could lead to the elimination of up to half of entry-level white-collar jobs. If anything even close to that occurs, it will destroy the means for employees to gain experience and raise even more questions about the value of a college education in its current form.
Efforts to promote use of AI
Federal government makes AI a priority in K-12
The Department of Education has made use of AI a priority for K-12 education, calling for integration of AI into teaching and learning, creation of more computer science classes, and the use of AI to “promote efficiency in school and classroom operations,” improve teacher training and evaluation, and support tutoring. It mentions “AI literacy” but implies that this means learning to use AI tools, which is only part of what students need. Technology companies have responded by providing more than $20 million to help create an AI training hub for K-12 teachers.
The digital publication District Administration says education has reached “a turning point” with AI, as pressure grows for adoption even as federal focus on ethics and equity has faded and federal guidelines do little to promote accountability, privacy, or data security. The push for more technology skills in K-12 comes as growth in computer science majors at universities has stalled while students weigh their job prospects amid layoffs at technology companies. That push also means that students are likely to enter college with considerable experience using generative AI in coursework, potentially deepening conflicts with faculty if colleges and universities fail to adapt.
Canvas to add generative AI
Instructure plans to embed ChatGPT into Canvas soon. The company's announcement about this is vague, though not all that surprising, especially because Blackboard has added similar capabilities. Instructure calls the new functions IgniteAI and says they can be used for "creating quizzes, generating rubrics, summarizing discussions, aligning content to outcomes." It says these will be opt-in features for institutions. (A Reddit post provides more details of what Instructure demonstrated at its annual conference.) What this means for the KU version of Canvas isn’t clear, but the Educational Technology staff will be evaluating the new tools.
Google and OpenAI create tools for students and teachers
Google and OpenAI have offered tailored versions of their generative AI platforms for teachers and students. Google has added Gemini to its Google for Education tools and has released Gemini for Education, pitching it as transformative because of its ability to personalize learning and "inspire fresh ideas." The free version offers only limited access to its top models and Deep Research function, but the paid version, which is used primarily by school districts, has full access.
OpenAI has added what it calls study mode to ChatGPT. The company says study mode takes a Socratic approach to help “you work through problems step by step instead of just getting an answer.” A PCWorld reviewer found the tool helpful, saying it "actually makes me use my brain." MIT Technology Review said, though, that it was “more like the same old ChatGPT, tuned with a new conversation filter that simply governs how it responds to students, encouraging fewer answers and more explanations.” It said the tool was part of OpenAI’s push “to rebrand chatbots as tools for personalized learning rather than cheating.”
AI companies see education as a lucrative market. By one estimate, educational institutions' spending on AI will grow by 37% a year over the next five years. Magic School, Curipod, Khanmigo, and Diffit are just four of many AI-infused tools created specifically for educators and students. That is important because student use in K-12 normalizes generative AI as part of the learning process.
To attract more students to ChatGPT, OpenAI made its pro version free for students for a few months in the spring. Google went even further, offering the pro version of Gemini free to students for a year. That means many students have access to more substantial generative AI tools than faculty do.
Social and technological trends
Online search is changing quickly
Nearly every search engine now uses generative AI to create summaries rather than providing lists of links. Those summaries usually cite only a small number of articles, and the chief executive of The Atlantic said Google was “shifting from being a search engine to an answer engine.” As a result, fewer people are clicking through to the underlying articles, and news sites and other organizations that rely on advertising report substantial declines in web traffic. Bryan Alexander speculates that if this trend continues, we could see a decline in the web as an information source. The Wall Street Journal said companies’ use of generative AI was “rewiring how the internet is used altogether.” This poses yet another challenge for educators as students draw on AI summaries rather than working through articles and synthesizing information on their own.
Use of AI agents is spreading
Agents allow AI systems to act autonomously. They generally work in sequence (or in tandem) to complete a task. A controlling bot (a parent) sends commands to other bots (child systems), which execute those commands, gather and check information, and either act on their own or push information back up the line for the parent bot to act on.
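To make that structure concrete, here is a minimal sketch of the parent/child pattern in Python. The agent roles, the task list, and the run_worker function are hypothetical; a real system would call a language model and external tools at each step rather than returning canned strings.

```python
# Minimal sketch of a parent/child agent pattern (hypothetical roles and tasks).
# A real system would call an LLM and external tools at each step; here each
# "worker" just returns a labeled result so the control flow is visible.

def run_worker(role: str, task: str) -> dict:
    """Child agent: carries out one task and reports back to the parent."""
    result = f"[{role}] completed: {task}"      # stand-in for an LLM or tool call
    needs_review = role == "researcher"         # e.g., gathered facts should be checked
    return {"role": role, "task": task, "result": result, "needs_review": needs_review}

def run_parent(goal: str) -> list[dict]:
    """Parent agent: breaks a goal into tasks, dispatches them, and checks results."""
    plan = [
        ("researcher", f"gather background on: {goal}"),
        ("checker", "verify the gathered facts"),
        ("writer", f"draft a summary of: {goal}"),
    ]
    reports = []
    for role, task in plan:                     # workers run in sequence here;
        report = run_worker(role, task)         # they could also run in parallel
        if report["needs_review"]:              # push questionable output back up the line
            report = run_worker("checker", f"re-check: {report['result']}")
        reports.append(report)
    return reports

if __name__ == "__main__":
    for step in run_parent("campus AI policy options"):
        print(step["result"])
```

The point is the division of labor: the parent plans, dispatches, and verifies, while the child agents carry out individual steps.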
Businesses have been cautious about deployment of agents, in part because of cost and security. Interest and spending have intensified, though, and companies have been using agents in such areas as customer service, inventory management, code generation, fraud detection, gene analysis, and the monitoring of digital traffic. One executive has said that software as a service was turning into "agent as a service."
Software companies have also made agent technology available to the public. OpenAI's agent can log into a learning management system and complete assignments. Perplexity's new browser uses AI to search, summarize, and automate tasks, and it has been used to write in Google Docs in a way that mimics a human pace. ChatGPT agents can complete homework assignments autonomously, connect to other applications, log into websites, and even click on the human verification boxes that many websites put up. ChatGPT has also been used to automate grading and analyze teaching. The website Imaginative says companies are in a race to create agents that "organize your day without forcing you to switch apps.” Just how effective current agents are is open to debate, but the use of autonomous systems is growing.
Many children use AI for companionship
A vast majority of teenagers prefer human friendship over AI companions, but a third say that interacting with an AI companion is at least as satisfying as speaking with a human, according to Common Sense Media. An Internet Matters report says children as young as 9 use generative AI for companionship and friendship. They practice conversations, seek advice about what to wear, and ask questions about such things as feelings and body image. Some college students say that generative AI is diminishing relationships with other students.
Video games gaining AI capabilities
Video game makers are experimenting with generative technology that gives characters memories and allows them to adapt to game play. Stanford and Google researchers have added simulations of real people to games. Genie, a tool from Google's DeepMind division, creates an interactive world based on user prompts or images, and allows users to change characters and scenery with additional prompts. Similar approaches are already being used in educational technology, and it seems likely that we will eventually see AI characters act as teachers that can adapt to students’ work, voices, and even facial expressions as they guide students through interactive scenarios.
Audio, video, and image abilities improve
As the speed of AI models improves, AI companies see voice as a primary means of user interaction with chatbots. Already, general AI models like ChatGPT, Gemini, Copilot, and Claude can analyze and create images and video, act on voice commands, and converse with users. Gemini will analyze information on a screen and provide advice on using and troubleshooting applications. A company called Rolling Square has created earbuds called Natura AI, which are the only means of accessing its AI system. Users interact with agents, which the company calls “AI people,” to do nearly anything that would usually require a keyboard and screen. A company called Rabbit has made similar promises with a device it released last year. It followed up this summer with an AI agent called Intern.
That is just one aspect of voice technology. More than 20% of online searches are done by voice, and the number of voice assistants being used has doubled since 2020, to 8.4 billion. Those include such tools as Alexa (Amazon), Siri (Apple), and Gemini (Google). The use of tools like Otter, Fireflies, and Teams to monitor and transcribe meetings is growing, and it is common to see someone’s chatbot as a proxy in online meetings. Students are using transcription tools to record lectures, and use of medical transcription is growing substantially. Companies are using voice agents on websites and for customer service calls, and companies and governments are using voice as a means of digital verification and security.
- AI eyeglasses. Companies are creating eyewear with AI assistants embedded in them. The glasses translate text and spoken language, read and summarize written material, search the web, take photographs and record video, recognize physical objects, and connect to phones and computers. They usually contain a small display on one lens, and some can speak to you through bone conduction speakers. The trend toward miniaturization will make keeping technology out of the classroom virtually impossible.
- AI Audio. The capability of AI systems to generate audio and music continues to improve. Technology from ElevenLabs, for example, is used in call centers, educational technology, and AI assistants. It can clone voices, change voices, or create new voices. Google’s NotebookLM creates podcasts from text, audio, and video files you give it, and other companies have begun offering similar capabilities. Tools like Suno and Udio create music from written prompts. Google’s assistant technology answers and screens calls on smartphones. AI is making the use of voice so prevalent that one blogger argues that we are returning to “an oral-first culture.”
So now what?
As use of generative AI grows among students, instructors must find ways to reimagine learning. That doesn't mean that everyone should adopt all things AI. As these trends indicate, though, the number of tools (and toys) that technology companies are infusing with AI is growing rapidly. Some of them offer promise for teaching and learning. Others will make cheating on traditional assignments easier and virtually impossible to detect. Adapting our classes will require experimentation, creativity, and patience. At CTE, we have many things planned (and already available) to help with that process, and we will continue to develop materials, provide examples, and help faculty adapt. We see opportunities for productive change, and we encourage instructors to join us.
Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.
How a new Copilot tool might be used in teaching
By Doug Ward
The KU version of Copilot now allows the creation of agents, which means you can customize Copilot and give it instructions on what you want it to do, how you want it to respond, and what format its output should follow.
An agent still uses Copilot’s foundational training, but the instructions can reduce the need for long, complex prompts and speed up tasks you perform regularly. You can also direct the agent to websites you would like it to draw on, and create starter prompts for users.
Copilot has also gained another function: the ability to store prompts for reuse. That isn’t nearly as useful as creating agents, but both additions give users additional control over Copilot and should make it more useful for many faculty members, staff members, and graduate students. (I don’t know whether the new functions are available to undergraduates, but they probably are.)
These features have been available for some time in paid versions of Copilot. What is new is the access available when you use your KU credentials to log in to Copilot, which is Microsoft’s main generative artificial intelligence tool.
Potential and limitations
Agents have the potential to improve the accuracy of Copilot’s responses because the directions you provide limit the scope of Copilot’s actions and tailor the tone and substance of those responses. Accuracy also improves if you give Copilot examples and specific material to work with (an uploaded document, for instance).
If you log in with your KU ID, Copilot also has additional layers of data protection. For instance, material you use in Copilot isn’t used for training of large language models. It is also covered by the same privacy protections that KU users have with such tools as Outlook and OneDrive.
In addition to potential, Copilot has several limitations. Those include:
- Customization restrictions. A Copilot agent allows you to provide up to 8,000 characters, or about 1,500 words, of guidance. That guidance is essentially an extended prompt created with natural language, but it includes any examples you provide or specific information you want your agent to draw on. The 8,000 characters may seem substantial, but that count dwindles quickly if you provide examples and specific instructions.
- Input restrictions. Once you create an agent, Copilot also has an input limit of 8,000 characters. That includes a prompt and whatever material you want Copilot to work with. If you have given your agent substantial instructions, you shouldn’t need much of a prompt, so you should be able to upload a document of about 1,500 words, a spreadsheet with 800 cells, or a PowerPoint file with eight to 16 slides. (Those are just estimates; the sketch after this list shows one way to check a document against the limit.) The limit on code files will vary depending on the language and the volume of documentation and comments. For instance, Python, Java, and HTML will use up the character count more quickly. The upshot is that you can’t use a Copilot agent to analyze long, complex material – at least in the version we have at KU. (The 8,000-character limit is the same whether you use an agent or use a prompt with Copilot itself.)
- Limit in scope. Tools like NotebookLM allow you to analyze dozens of documents at once. I haven’t found a way to do that with a Copilot agent. Similarly, I haven’t found a way to create a serial analysis of materials. For instance, there’s no way to give Copilot several documents and ask it to provide individual feedback on each. You have to load one document at a time, and each document must fall within the limits I list above.
- Potential fabrication. The guidance you provide to a Copilot agent doesn’t eliminate the risk of fabrication. All material created by generative AI models may include fabricated material and fabricated sources. They also have inherent biases because of the way they are trained. It is crucial to examine all AI output closely. Ultimately, anything you create or do with generative AI is only as good as your critical evaluation of that material.
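Because the same 8,000-character ceiling applies to agent guidance and to input, it can help to check a document's length before pasting or uploading it. Here is a minimal sketch in Python; the file name and the prompt are made up, and the limit is simply the figure cited above, which could change.

```python
# Rough pre-flight check against an assumed 8,000-character input limit.
# The limit comes from the figures above; adjust it if Copilot's actual
# limits change. The file path and prompt are hypothetical.

from pathlib import Path

CHAR_LIMIT = 8000

def check_fit(path: str, prompt: str = "") -> None:
    text = Path(path).read_text(encoding="utf-8")
    total = len(prompt) + len(text)
    words = len(text.split())
    print(f"{path}: {len(text):,} characters, ~{words:,} words")
    if total <= CHAR_LIMIT:
        print(f"Fits, with {CHAR_LIMIT - total:,} characters to spare (prompt included).")
    else:
        print(f"Too long by {total - CHAR_LIMIT:,} characters; trim or split the document.")

check_fit("case_report_draft.txt", prompt="Give feedback on clarity and consistency.")
```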
An example of what you might do
I have been working with the Kansas Law Enforcement Training Center, a branch of KU that provides training for officers across the state. It is located near Hutchinson.
One component of the center’s training involves guiding officers in writing case reports. Those reports provide brief accounts of crimes or interactions an officer has after being dispatched. They are intended to be factual and accurate. At the training center, officers write practice reports, and center staff members provide feedback. This often involves dozens of reports at a time, and the staff wanted to see whether generative AI could help with the process.
Officers have the same challenges as all writers: spelling, punctuation, grammar, consistency, and other structural issues. Those issues provided the basis for a Copilot agent I created. That agent allows the staff to upload a paper and, with a short prompt, have Copilot generate feedback. A shareable link allows any of the staff members to use the agent, improving the consistency of feedback. The agent is still in experimental stages, but it has the potential to save the staff many hours they can use for interacting with officers or working with other aspects of training. It should also allow them to provide feedback much more quickly.
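For illustration only, here is the kind of guidance such a feedback agent might be given. This is not the actual guidance used for the training center's agent; it is a hypothetical sketch, written here as a Python string so its length can be checked against the 8,000-character limit described above.

```python
# Hypothetical agent guidance for a report-feedback assistant (not the actual
# guidance used at the training center). Storing it as a string makes it easy
# to confirm it stays well under the 8,000-character ceiling.

AGENT_GUIDANCE = """\
Role: You are a writing coach reviewing police case-report drafts.
Audience: Law enforcement officers in training.
Tone: Direct, respectful, and specific.

When given a report, provide feedback in three short sections:
1. Clarity and accuracy: flag vague wording, missing facts, or inconsistencies.
2. Mechanics: note recurring spelling, punctuation, and grammar problems,
   quoting one example of each rather than listing every instance.
3. Structure: comment on chronological order and completeness.

Do not rewrite the report. Do not invent facts that are not in the draft.
End with two or three priorities for revision.
"""

print(f"Guidance length: {len(AGENT_GUIDANCE)} characters (limit: 8,000)")
```

Guidance along those lines does the work of a long prompt once, so a staff member needs to type only a brief request with each upload.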
Importantly, the Copilot agent keeps the staff member in control. It creates a draft that the staff member can edit or expand on before providing feedback to the officer. That is, Copilot provides a starting point, but the staff members must draw on their own expertise to evaluate that output and decide what would be useful to the officer.
Other potential uses
If you aren’t sure whether you could use a Copilot agent in your teaching-related work, consider how you might use a personal assistant who helps with your class. What areas do students struggle with? What do they need help with when you aren’t available? What do they need more practice with? How can you help students brainstorm and refine ideas for projects and papers? What aspects of your class need to be re-envisioned? What tasks might you give an assistant to free up your time?
For instance, a CTE graduate fellow hopes to create an agent to help students learn MLA and APA style. I have written previously about how Copilot can be used as a coach for research projects. Many faculty members at the University of Sydney have created agents for such tasks as tutoring, skill development, and feedback to students. Their agents have been used to help students in large classes prepare for exams; help faculty create case studies and provide feedback on student work; help students troubleshoot problems, improve grammar skills, practice interviewing, better understand lecture content, create research proposals, and get answers to general questions about a class when an instructor isn’t available. Those faculty members are in fields such as biology, occupational therapy, biochemistry, education, social work, psychology, nursing, and journalism.
Some of the examples at the University of Sydney may be difficult for KU faculty to emulate because Sydney has a custom-built system called Cogniti. That system uses Copilot agents but has more sophisticated tools than KU has. Microsoft has also created many types of agents. As with the examples from Sydney, some are beyond the capabilities of the system we have access to at KU, but they can give you a sense of what is possible.
If you decide to create your own agent, I explain in a separate article and video how you can do that. My goal is to help instructors explore ways to use generative artificial intelligence proactively rather than feel like they are constantly fighting against its misuse. If nothing else, creating guidance for an agent can help you better articulate steps students can take to improve their learning and identify areas of your class you might want to improve.
Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.
Surveys suggest a steep, rocky hill ahead for education's adaptation to AI
By Doug Ward
Adapting colleges and universities to generative artificial intelligence was never going to be easy. Surveys released over the past two weeks provide evidence of just how difficult that adaptation will be, though.
Here’s a summary of what I'm seeing in the results:
Faculty: We lack the time, understanding, and resources to revamp classes to an AI age. A few of us have been experimenting, but many of us don’t see a need to change.
Administrators: We think generative AI will allow our institutions to customize learning and improve students' research skills, but we need to make substantial changes in teaching. We are spending at least some time and money on AI, but most of our institutions have been slow to adapt and aren’t prepared to help faculty and staff gain the skills and access to the tools they need.
That’s oversimplified, but it captures some of the broad themes in the surveys, suggesting (at least to me) a rocky path over the coming years. And though the challenges are real, we can find ways forward. From what I'm seeing in the surveys, we need to help instructors gain experience with generative AI, encourage experimentation, and share successes. We also need to do a better job of defining AI, generative AI, and use of AI, especially in class policies and institutional guidance. The surveys suggest considerable confusion. They also suggest a need to move quickly to help students gain a better understanding of what generative AI is, how it can be used effectively, and why it has many ethical challenges associated with it. In most cases, that will require a rethinking of how and what we teach. We have provided considerable guidance on the CTE website, and we will continue to explore options this spring.
Some of the more specific results from the surveys can help guide us toward the areas that need attention.

U.S. educators see AI differently from their global counterparts
Faculty in the United States and Canada view generative AI in a far gloomier way than their colleagues in other countries, a survey from the Digital Education Council suggests. They are far more likely to say that generative AI is a challenge and that they will not use it in their teaching in the future.
Worldwide, 35% of the survey’s respondents said generative AI was a challenge to education and 65% said it was an opportunity. Regionally, though, there were considerable differences, with 43% of faculty in the U.S. and Canada calling AI a challenge, compared with 35% in Europe, the Middle East, and Africa; 30% in the Asia-Pacific region; and 22% in Latin America.
Similarly, a much greater percentage of faculty in the U.S. and Canada said they did not expect to use AI in their teaching in the future. Looked at another way, 90% to 96% of faculty in other regions of the world said they expected to integrate AI into their classes, compared with 76% in the U.S. and Canada.
Alessandro Di Lullo, chief executive of the Digital Education Council, said in a briefing before the survey results were released that faculty skepticism in the U.S. and Canada surprised him. Historically, he said, instructors in both countries have shown a “propensity towards innovation and more openness towards innovation.”

AAC&U survey suggests need but little momentum
A survey released this week by the Association of American Colleges and Universities and Elon University offered a similarly sober assessment of U.S. higher education’s handling of generative AI. That survey included only university leaders, with large percentages saying their institutions weren’t prepared to help faculty, students, or staff work with generative AI even though they anticipate a need for substantial change.
Leaders of small colleges and universities expressed more concern than those at larger institutions. Eighty-seven percent of leaders at small institutions (those with fewer than 3,000 students) said that preparing faculty to guide students on AI was a key challenge, compared with 51% to 54% at larger institutions. Leaders said the biggest challenges included faculty’s lack of familiarity with – and resistance to – generative AI tools; worries that AI will diminish student learning; lack of training and infrastructure to handle generative AI; and security.
“Use of these tools is an attack on everything we do,” one leader said in the survey.
Most leaders said they were concerned about academic integrity, student reliance on AI tools, and digital inequities, but they also said generative AI would enhance learning and improve student skills in research and writing, along with creativity. Among leaders at institutions with 10,000 or more students, 60% said they expected the teaching model to change significantly in the next five years to adapt to generative AI.
Most leaders see a need for some immediate changes, with 65% saying that last year's graduates were not prepared to work in jobs that require skills in generative AI.
Figuring out the role of generative AI in teaching
In the Digital Education Council survey, 61% of faculty respondents said they had used generative AI in their teaching, although most reported minimal to moderate use, primarily for creating class material but also for completing administrative tasks, helping students learn about generative AI, engaging students in class, trying to detect cheating, and generating feedback for students.
Of the 39% of respondents who said they didn’t use generative AI, reasons included lack of time, uncertainty about how to use it in teaching, and concern about risks. Nearly a quarter said they saw no clear benefits of using generative AI.
That tracks with what I have seen among faculty at KU and at other universities. Many see a need for change but aren't sure how to proceed. Most have also struggled with how to maintain student learning now that generative AI can be used to complete assignments they have developed over several years.
Danny Bielik, president of Digital Education Council, said in a briefing that administrators needed to understand that many instructors were struggling to see the relevance of generative AI in their teaching.
“It's a wake-up call and a reminder to institutional leadership that these people exist, they're real, and they also need to be brought along for the journey if institutions are starting to make decisions,” Bielik said.

'The role of humans is changing'
Other elements of the survey tracked along familiar lines:
- Views of AI. 57% of respondents said they had a positive view of AI in education and 13% had a negative view. The rest were somewhere in between.
- Roles of instructors. 64% said they expected the roles of instructors to change significantly because of generative AI; 9% expected minimal or no change. Relatedly, 51% said AI was not a threat to their role as an instructor, and 18% said their role was threatened. Those who considered themselves more proficient with generative AI were more likely to say that teaching would need to adapt.
- AI as a skill. Two-thirds of respondents said it was important to help students learn about generative AI for future jobs. Even so, 83% said they were concerned about students’ ability to evaluate the output of chatbots, with a similar percentage saying they worried about students becoming too reliant on AI.
- Use of AI in class. 57% of faculty surveyed said they allowed students to use generative AI on assignments as long as they followed instructor stipulations and disclosed its use; 23% said no AI use was permitted, and 11% said AI use was mandatory.
Di Lullo said he was surprised by some of the results, especially because “the role of humans is changing” and colleges and universities need to adapt.
Bielik said the survey results were a “very good indication that there are people not necessarily sitting on the fence, but they're not paying as much attention to it as we are.”
Yet another recent poll supports that observation. Just a few days after the Digital Education Council survey was released, a Gallup poll said that nearly two-thirds of Americans didn't realize they were already using AI-infused technology. That technology includes such things as assistant software like Siri and Alexa, navigation software, weather apps, social media, video streaming, and online shopping. Overall, Gallup said, Americans tend to see generative AI in negative terms, with young adults (age 18 to 29) expressing the highest percentage of concern about its impact. Three-fourths of young adults said they were especially worried about job prospects as use of generative AI grows. Those of all ages who know about AI’s integration into existing technology view it more positively.
As we rethink our teaching, we need to build community and trust among students and encourage them to help us find a way forward. We also need to help students understand how the skills they gain in college will help them become more adaptable to change. First, though, we need to adapt ourselves.

Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.
Some thoughts about generative AI as the semester starts
By Doug Ward
The shock has worn off, but the questions about how to handle generative artificial intelligence in teaching and learning seem only to grow.
Those questions lack easy answers, but there are concrete steps you can take as we head into the third year of a ChatGPT world:
- Create a clear policy about generative AI use in your class.
- Talk with students about generative AI frequently. Encourage questions.
- Talk frequently about the skills students gain in your class. Explain why those skills are important and how students can use them. Do this early in the semester and then with each assignment.
- Build community and trust in your classes. Student use of generative AI is a symptom of underlying issues of trust, perceptions of value, and social pressures, among other things.
- Create assignments that help students explore generative AI. You don't have to like or promote generative AI, but students need to understand its strengths and weaknesses, and how to approach its output with a critical eye.
- Experiment with generative AI yourself and consider how it is – or might – change your discipline and your teaching.
That’s just a start. As I said, the questions about generative AI keep piling up. Here are a few additional updates, thoughts, and observations.
What is the university doing with AI?
Several things have been taking place, and there are many opportunities to learn more about generative AI.
- AI Task Force. A task force that includes members of the Lawrence and medical school campuses began work in the fall. It will make recommendations on how the university might approach generative AI. It will then be up to university leaders, faculty, and university governance to decide what types of policies (if any) to pursue.
- Faculty Senate and University Senate. Both governance bodies have had discussions about generative AI, but no formal policies have emerged.
- University bot. The university has contracted with a vendor to provide a chatbot for the KU website. The bot is still being developed, but vendor interviews focused on such uses as interacting with prospective students, responding to text queries from students, providing reminders to students, and answering questions related to IT and enrollment management.
- AI in Teaching Working Group. This group, through the Center for Teaching Excellence, meets monthly online, and it has a related Teams site. If you are interested in joining either, email Doug Ward (dbward@ku.edu).
- AI think tank. Lisa Dieker (lisa.dieker@ku.edu) has organized the AI Tech User Think Tank through the FLITE Center in the School of Education and Human Sciences. It is intended primarily for connecting faculty interested in AI-related grant work and research, but meetings cover many types of AI-related issues. Contact her if you are interested in joining.
- Digital Education Council. The School of Education and Human Sciences has joined the Digital Education Council, an international group of universities and corporations focused on collaborative innovation and technology. Much of the group’s recent work has focused on use of generative AI in education and industry.
- Libraries AI discussion group. The KU Libraries staff has been actively exploring how generative AI might change the way people search, find, and use information. A Teams discussion site has been part of that. Most conversations are, of course, library related, but participants often share general information about AI or about library trials.
- CTE AI course. CTE has made AI-related modules available for instructors to copy, use, or adapt in their own courses. The modules cover such areas as how generative AI works, why it creates many ethical quandaries, how it can be used ethically, and what the future of AI might entail. Anyone interested in gaining access to the modules should email Doug Ward (dbward@ku.edu).
What about a policy for classes?
The university has no policy related to AI use in classes, and we know of no policy at the school level, either. That means it is crucial for instructors to talk with students about expectations on AI use and to include syllabus information about use of, or prohibitions on, generative AI.
We can’t emphasize that enough: Talk with students about generative AI. Encourage them to ask questions. Make it clear that you welcome those questions. No matter your policy on use of generative AI, help students understand what skills they will gain from your class and from each assignment. (See Maintaining academic integrity in the AI era.)
What are we hearing about AI use among students?
Students have been conflicted about generative AI. Some see use of it as cheating. Some view the training of generative AI on copyrighted material as theft of intellectual property. Some worry about privacy and bias. Others worry about AI’s environmental impact.
Even so, large percentages of students say they use generative AI in their coursework, even if instructors ask them not to. They expect faculty to adapt to generative AI and to help them learn how to use it in jobs and careers. For the most part, that hasn’t happened, though.
Most students welcome the opportunity to talk about generative AI, but many are reluctant to do so out of fear that instructors will accuse them of cheating. That has to change. Only by engaging students in discussions about generative AI can we find a way forward.
Why are so many students using generative AI?
Many instructors assume students are lazy and want to cheat. The reality is far more complex. Yes, some avoid the hard work of learning. Most, though, use generative AI for other reasons, which include the following:
- Students feel unprepared. Many students struggled during the pandemic. Expectations of them diminished, and many never gained the core reading, writing, math, and analytical skills they need in college. College requirements and expectations have largely remained the same, though, with students unsure how to cope. Generative AI has become a way to make up for shortcomings.
- They feel overwhelmed. Some students have families or other obligations, many work 20 or more hours a week, and most still feel lingering effects from the pandemic. Anxiety, depression, and related mental health issues have increased. That mix pushes many students to take shortcuts just to get by.
- They feel pressured to achieve high GPAs. Scholarships often require a 3.5 GPA or higher, and students who want to attend graduate school or medical school feel a need to maintain high GPAs. That can push them toward AI use if they fear falling below whatever benchmark they have set for themselves or that others have imposed on them.
- They lack skills in time management. Students who wait until the last minute to study or to complete assignments create unnecessary stress for themselves. They then discover that some assignments can’t be completed at the last minute and turn to AI for help.
- They worry about job expectations. Students have been getting mixed messages about generative AI. Some instructors denounce it and see any use of it as cheating. At the same time, many employers say they expect graduates to know how to use it. Current students are especially job-oriented. Depending on what they hear and read, they may see experience with generative AI as more important than skills they would gain by doing coursework themselves.
- They see a degree as a consumer product. As the cost of college has increased, many students have started looking at a degree in transactional terms. A degree is simply a means to a job. They are paying a lot of money, the reasoning goes, and that should give them the right to use whatever tools they want to use and to approach class in whatever way helps them succeed.
- They don’t see value in an assignment or class. This is a big aspect of most types of academic misconduct. Most students want to learn, but they don’t always understand why they must take particular classes or complete some assignments. If students don’t see value in an assignment or a class, they may simply turn the work over to generative AI.
- Generative AI is ubiquitous. We have all been using some form of AI for years, and generative AI is being added to most software. It often seems impossible to avoid, and many students already see it as just another tool at their disposal.
None of that should be seen as an excuse to cheat, but instructors need to keep it in mind as they structure their classes. Here are some strategies they can use:
- Never assume students have particular skills. Even if they had these skills in the past, they may not today. Provide opportunities for review or for practicing and learning crucial skills.
- Scaffold assignments. Assign papers or projects in smaller segments so that students can get feedback and revise their work. Avoid high-stakes tests and use low-stakes quizzes and exams instead.
- Make expectations transparent. Explain the skills students need to succeed and the expectations you have for successful completion of assignments. Provide examples. Provide resources for reviewing or learning expected skills. Encourage students to talk with you privately if they feel ill-prepared to handle an assignment.
- Provide flexibility. This might involve dropping the lowest grades in segments of coursework; giving students, say, two opportunities to turn in work a few days after a deadline; or offering leniency if they communicate with you about challenges they are having. Most students still need structure, but flexibility will diminish pressure on them and on you.
Can’t detectors solve problems of student AI use?
Results from AI detectors are not proof that a student has cheated. Detection tools have proved unreliable, and students can fool them with minimal effort. Research has been consistent that AI detectors are far more likely to flag the work of students for whom English is not a first language. Because of those problems, many universities have disabled AI detectors. Faculty at KU have access to a Turnitin detector. If you use it, use the results as information for talking with students.
What are you hearing from the Generative AI in Teaching Working Group?
Discussions have been robust and energizing, covering such topics as creating policies about student use of generative AI, integrating generative AI into courses, finding and experimenting with new tools, helping students learn about the pitfalls and biases of AI tools, and finding ways to expand learning with generative AI. Members share news from conferences, colleagues, readings, and research. They offer ideas, insights, recommendations, and support. Here are a few things that come up frequently.
- More instructors need to experiment. Most members of the working group have been experimenting with generative AI tools and with ways to integrate them into teaching and learning. That can sometimes feel isolating, especially if colleagues are unwilling to learn about generative AI or to acknowledge student use of it.
- We need access to tools. Many people who want to explore feel held back by lack of access to appropriate tools and by the university’s slow response to generative AI.
- We need time and resources. Experimenting with generative AI takes time, and integrating it into classes takes more time, especially because many long-used approaches need to be rethought.
- We need to help students learn about generative AI. Most are uninformed and use generative AI with little or no understanding of how to prompt effectively, how to scrutinize the output of chatbots, or how to choose appropriate tools. Students need to know about how businesses are using AI tools and how to prompt chatbots effectively. The university needs courses that delve into the ethical aspects of generative AI, including privacy, safety, and appropriate use. Existing classes also need to integrate these topics.
Will the students of the near future be able to get a meaningful education without AI assistance?
Absolutely. The human aspects of education remain as crucial as ever. A college education provides access to a community of learners, interaction with professors and peers, support from experienced staff and advisors, and an opportunity to explore the outer and inner worlds with a diverse group of people.
But ...
We are still trying to figure out how and where generative AI fits into human-centered learning. People often associate generative AI with ChatGPT, but thousands of specialty tools have been developed. Many of those tools show promise in education, including those that adapt to individual students, provide opportunities to practice skills, provide feedback and direction, and allow people to explore topics they might never have considered before. We have only scratched the surface on how generative AI might transform teaching and learning.
One thing is clear: Students who hand off the challenging aspects of learning to generative AI will eventually fail. Learning takes work. It’s hard. It takes time and practice. Generative AI may be able to assist at times, but it can’t learn for students.
Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.
How K-12 education connects to AI literacy in college
By Doug Ward
Kansas ranks near the bottom in the percentage of schools offering foundational computer science education, according to a study by Code.org, the Computer Science Teacher Association, and the Expanding Computing Education Pathways Alliance.
Nationwide, 57.5% of schools offered a computer science class in 2023. Kansas was more than 20 percentage points below that average, with 36% of schools offering a foundational course. Only three states had lower percentages: Louisiana (35%), Montana (34%), and Minnesota (28%).
That has important implications for higher education. Many Kansas students who attend KU may have little understanding of how generative artificial intelligence and the large language models behind it work. That puts them at a disadvantage in understanding how to use generative AI effectively and how to approach it critically. Computer science courses aren't the only way students can learn about generative AI, but a growing number of states see those courses as crucial to the future.
Shuchi Grover, director of AI and education research at Looking Glass Ventures, delved into that in a recent speech at the National Academies of Sciences, Engineering, and Medicine.
“You want children to be equipped with understanding the world they live in,” Grover said. “Think about how much technology is all around them. Is it wise to completely leave them in the dark about what computing and AI is about?”

More than 10,000 schools nationwide do not offer a computer science course, the Code.org report says. Not surprisingly, schools with 500 students or fewer are the least likely to offer such a course, as are rural schools (which are often the same). The report noted a disparity in access for students of color, students with disabilities, and students who come from low-income families. Young women represented only 31% of students enrolled in foundational computer science courses.
Like Grover, the authors of the Code.org study make a compelling point about the connection between computer science and generative AI. The report says (in bold): “We cannot prepare students for a future with AI without teaching them the foundations of computer science.”
I'm all in favor of teaching digital literacy, computer literacy, and AI literacy. Students can learn those skills in many ways, though. Requiring a computer science course seems less important than providing opportunities for students to explore computer science and improve their understanding of the digital world.
Efficiency vs. creativity
A couple of other elements of Grover’s talk at the National Academies are worth noting.
An audience member said that generative AI was generally portrayed in one of two ways: using it to do existing things better (efficiency) or to approach new problems in new ways (“to do better things”). Most studies have focused on efficiency, he said, to the exclusion of how we might apply generative AI to global challenges.
Grover said that she thought we definitely needed to focus on bigger issues. Efficiency has a role, though.
“This idea of efficiency in the school system is fraught,” Grover said. “Time fills up no matter how many efficiency tools you give them. And I think it’s unfair. Teachers all over the world, especially in the U.S. and I also see in India, are so overworked. ... I think it’s good that AI can help them with productivity and doing some of that drudgery – you know, the work that just fills up too much time – and take that off their plate.”
Schools in the United States have been slow to respond to generative AI, she said, because the system is so decentralized. Before the use and understanding of generative AI can spread, she said, “a teacher has to be able to use it and has to be able to see value.”
That will require listening.
“I think we need to listen to teachers – a lot. And maybe there’s something we can learn about where we need to focus our efforts. … Teachers need to have a voice in this – a big voice.”
Briefly …
Cheap AI ‘video scraping’ can now extract data from any screen recording, by Benj Edwards. Ars Technica (17 October 2024).
Stanford Researchers Use AI to Simulate Clinical Reasoning, by Abby Sourwine. Government Technology (10 October 2024).
Forget chat. AI that can hear, see and click is already here, by Melissa Heikkilä. MIT Technology Review (8 October 2024).
Colleges begin to reimagine learning in an AI world, by Beth McMurtrie. Chronicle of Higher Education (3 October 2024).
Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating, by Benj Edwards. Ars Technica (20 September 2024).
United Nations wants to treat AI with the same urgency as climate change, by Will Knight. Wired, via Ars Technica (20 September 2024).
Where might AI lead us? An analogy offers one possibility
By Doug Ward
As I prepared to speak to undergraduates about generative artificial intelligence last October, I struggled with analogies to explain large language models.
Those models are central to the abilities of generative AI. They have analyzed billions of words, billions of lines of code, and hundreds of millions of images. That training allows them to predict sequences of words, generate computer code and images, and create coherent narratives at speeds humans cannot match. Even programmers don’t fully understand why large language models do what they do, though.
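Real large language models rely on transformer neural networks and billions of learned parameters, but the core idea of predicting the next word from patterns in training text can be shown with a toy example. The sketch below, using a made-up snippet of "training" text and a hypothetical starting word, counts which word follows which and then extends a phrase by always picking the most common successor.

```python
# Toy illustration of next-word prediction: count word pairs in a tiny
# "training" text, then extend a phrase by repeatedly choosing the word
# that most often follows the current one. Real LLMs learn far richer
# patterns, but the predict-the-next-word loop is the same basic idea.

from collections import Counter, defaultdict

training_text = (
    "the chef reads the cookbook and the chef creates a new dish "
    "the chef tastes the dish and the chef adjusts the dish"
)

# Count how often each word follows each other word.
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def continue_phrase(start: str, length: int = 6) -> str:
    phrase = [start]
    for _ in range(length):
        options = followers.get(phrase[-1])
        if not options:
            break
        phrase.append(options.most_common(1)[0][0])  # pick the most common next word
    return " ".join(phrase)

print(continue_phrase("the"))
```

The greedy version quickly falls into a repetitive loop, which is one reason real systems sample with some randomness rather than always choosing the single most likely word.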
So how could I explain those models for an audience of novices?
The path I took in creating an analogy illustrates the strengths and weaknesses of generative AI. It also illustrates a scenario that is likely to become increasingly common in the future: similar ideas developed and shared simultaneously. As those similar ideas emerge in many places at once, the role of individuals in developing those ideas will also grow increasingly important – through understanding of writing, coding, visual communication, context, and humanity.

Getting input from generative AI
In my quest for an analogy last fall, I turned to Microsoft Copilot for help. I prompted Copilot to act as an expert in computer programming and large language models and to explain how those models work. My audience was university undergraduates, and I asked for an analogy to help non-experts better understand what goes on behind the scenes as generative AI processes requests. Copilot gave me this:
Generative AI is like a chef that uses knowledge from a vast array of recipes to create entirely new and unique dishes. Each dish is influenced by past knowledge but is a fresh creation designed to satisfy a specific request or prompt.
I liked that and decided to adapt it. I used the generative tool Dall-E to create images of a generative AI cookbook, a chef in a futuristic kitchen, and food displayed on computer-chip plates. I also created explanations for the steps my large language model chef takes in creating generative dishes.
How a large language model chef works
Within this post, you will see the images I generated. Here’s the text I used (again modified from Copilot’s output):
1. A chef memorizes an enormous cookbook (a dataset) so that it knows how ingredients (words, images, code) are usually put together.
2. Someone asks for a particular dish with special ingredients (a prompt), so the chef creates something new based on everything it has memorized from the cookbook.
3. The chef tastes the creation and makes sure it follows guidance from the cookbook.
4. Once the chef is satisfied, it arranges the creation on a plate for serving. (With generative AI, this might be words, images or code.)
5. The chef’s patrons taste the food and provide feedback. The chef makes adjustments and sends the dish back to patrons. The chef also remembers patrons’ responses and the revisions to the dish so that next time the dish can be improved.

A striking similarity
I explain all that because I came across the same analogy in Ethan Mollick’s book Co-intelligence. Mollick is a professor at the University of Pennsylvania whose newsletter and other writings have been must-reads over the past two years because of his experiments with generative AI, his early access to new tools, and his connections to the AI industry.
In the first chapter of Co-intelligence, Mollick provides some history of AI development and the transformer technology and neural networks that make generative AI possible. He then explains the workings of large language models, writing:
Imagine an LLM as a diligent apprentice chef who aspires to become a master chef. To learn the culinary arts, the apprentice starts by reading and studying a vast collection of recipes from around the world. Each recipe represents a piece of text with various ingredients symbolizing words and phrases. The goal of the apprentice is to understand how to combine different ingredients (words) to create a delicious dish (coherent text).
In developing that analogy, Mollick goes into much more detail than I did and applies well-crafted nuance. The same analogy that helped me explain large language models to undergraduates, though, helped Mollick explain those models to a broader, more diverse audience. Our analogies had another similarity: They emerged independently from the same tool (presumably Microsoft Copilot) about the same time (mid- to late 2023).
Why does this matter?
I don’t know for certain that Mollick’s analogy originated in Copilot, but it seems likely given his openness about using Copilot and other generative AI tools to assist in writing, coding, and analysis. He requires use of generative AI in his entrepreneurship classes, and he writes frequently about his experiments. In the acknowledgements of his book, he gives a lighthearted nod to generative AI, writing:
And because AI is not a person but a tool, I will not be thanking any LLMs that played a role in the creation of this book, any more than I would thank Microsoft Word. At the same time, in case some super-intelligent future AI is reading these words, I would like to acknowledge that AI is extremely helpful and should remember to be kind to the humans who created it (and especially to the ones who wrote books about it).
It was a nice non-credit that acknowledged the growing role of generative AI in human society.
I understand why many people use generative AI for writing. Good writing takes time, and generative AI can speed up the process. As Mollick said, it’s a tool. As with any new tool, we are still getting used to how it works, what it can do, and when we should use it. We are grappling with the proprieties of its use, the ethical implications, and the potential impact on how we work and think. (I’m purposely avoiding the impact on education; you will find much more of that in my other writings about AI.)
I generally don’t use generative AI for writing, although I occasionally draw on it for examples (as I did with the presentation) and outlines for reports and similar documents. That’s a matter of choice but also habit. I have been a writer and editor my entire adult life. It’s who I am. I trust my instincts and my experience. I’m also a better writer than any generative AI system – at least for now.
I see no problem in the example that Mollick and I created independently, though. The AI tool offered a suggestion when we needed one and allowed us to better inform our respective audiences. It just happened to create similar examples. It was up to us to decide how – or whether – to use them.

Where to now?
Generative AI systems work by prediction, with some randomness. The advice and ideas will be slightly different for each person and each use. Even so, the systems’ training and algorithms hew toward the mean. That is, the writing they produce follows patterns the large language model identifies as the most common and most likely, based on what millions of people have written in the past. That’s good in that the writing follows structural and grammatical norms that help us communicate, and it is a central reason generative AI has become so widely used in the past two years: those norms have helped millions of people improve their writing. The downside is that the generated writing often has a generic tone, devoid of voice and inflection.
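To make “prediction, with some randomness” a bit more concrete, here is a minimal sketch of how a system might choose the next word from a set of scored candidates. The candidate words, the scores, and the temperature values are invented for illustration; the score-then-sample step is the general idea, not a description of how any particular product works.

```python
# Illustrative only: the model assigns a score to every candidate next word,
# converts the scores to probabilities, and samples one. Likely words dominate,
# which is why generated prose tends toward the most common patterns.
import math
import random

# Made-up scores for candidate words to follow "The results were ..."
candidate_scores = {"significant": 3.0, "promising": 2.5, "mixed": 1.5, "luminous": 0.2}

def sample_next_word(scores, temperature=1.0):
    """Convert scores to probabilities (softmax) and sample one word.

    Lower temperature sharpens the distribution toward the top choice;
    higher temperature flattens it and adds variety.
    """
    exp_scores = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exp_scores.values())
    words = list(exp_scores.keys())
    weights = [v / total for v in exp_scores.values()]
    return random.choices(words, weights=weights, k=1)[0]

# Low temperature heavily favors the front-runner ("significant");
# higher temperature produces more variety, but rare words stay rare.
print([sample_next_word(candidate_scores, temperature=0.2) for _ in range(5)])
print([sample_next_word(candidate_scores, temperature=2.0) for _ in range(5)])
```

Run it a few times and the low-temperature line nearly always returns the most likely word, a small-scale version of generated writing hewing toward the mean.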
Research suggests that the same thing happens with ideas generative AI provides. For example, a study in Science Advances suggests that generative AI can improve creativity in writing but that stories in which writers use generative AI for ideas have a sameness to them. The authors suggest that overuse of generative AI could eventually lead to a generic quality in AI-supported stories.
My takeaway is that use of generative AI in writing comes with a cognitive and creative cost. We may get better writing, and research so far suggests that the weakest writers benefit the most from AI’s advice. Other research suggests that use of generative AI can make writing more enjoyable for weaker writers. On the other hand, a recent study suggests that human-written work is still perceived as superior to that produced by generative AI.
Mollick argues that generative AI can be an excellent partner in writing, coding, and creative work, providing a nudge, pointing the way, or reassuring us in tasks that inevitably lead to inspirational lulls, dead ends, and uncertainty. The title of his book, Co-intelligence, represents his assertion that AI can augment what we do but that we, as humans, are still in control.
That control means that writing with a strong voice and uniquely human perspective still stands out from the crowd, as do ideas that push boundaries. Even so, I expect to see similar ideas and analogies emerging more frequently from different people in different places and shared simultaneously. That will no doubt lead to conflicts and accusations. As generative AI points us toward similar ideas, though, the role of individuals will also grow increasingly important. That is, what generative AI produces will be less significant than how individuals shape that output.
How Wall Street deals reach into classes
By Doug Ward
Canvas will soon be absorbed by KKR, one of the world’s largest investment firms.
That is unlikely to have any immediate effect on Canvas users. The longer-term effects – and costs – are impossible to predict, though.
Instructure, the company behind Canvas, has agreed to be acquired by KKR for $4.8 billion. KKR and similar companies have a reputation for laying off employees and cutting salaries and other expenses at companies they acquire. The investment firms look at it another way: They simply increase efficiency and make companies healthier.
KKR also owns TeachingStrategies, an online platform for early childhood education. Earlier this year, it acquired the publisher Simon & Schuster. It also owns such companies as DoorDash, Natural Pet Food, the augmented reality company Magic Leap, and OverDrive, which provides e-books and audiobooks to libraries. (The Lawrence Public Library uses OverDrive’s Libby platform.)
The acquisition of Instructure came in the same week that the online program manager 2U filed for bankruptcy protection. The company was valued at $5.8 billion in 2018, according to The Chronicle of Higher Education, but its finances faded as institutions began to rethink agreements in which the company, like similar providers, took 50% or more of tuition dollars from online classes.
The acquisition and the bankruptcy are reminders of how connected education and learning are to the world of high finance. Even as institutions struggle to make ends meet, they spend millions of dollars on technology for such things as learning management systems, online tools, online providers, communication, video and audio production, internet connectivity, wi-fi, tools for daily tasks like writing and planning, and a host of services that have become all but invisible.
A multi-billion-dollar market
By one account, education technology companies raised $2.8 billion in funding last year. That doesn’t include $500 million that Apollo Funds invested in the publisher Cengage. The total is down substantially from 2021 and 2022, when investors put more than $13 billion into education technology companies, according to Reach Capital, an investment firm that focuses on education. That bump in financing took place as schools, colleges, and universities used an infusion of government pandemic funds to buy additional technology services.
None of that is necessarily bad. We need start-up companies with good ideas, and we need healthy companies to provide technology services. Those tools allow educators to reach beyond the classroom and allow the steady functioning of institutions. But they also make education, which rarely tries to create its own technology, a captive audience for the companies that provide those services.
The companies have used various strategies to try to gain a foothold at colleges and universities. Over the past decade, many have provided free access to instructors who adopt digital tools for classes. Students then pay for those services by the semester. That charge may seem trivial, but students rarely know about it before they begin classes, and even a small additional fee can create financial hardship for some.
The university pays for tools like Canvas, drawing on money from tuition and fees and a dwindling contribution from the state. That lowers individual costs by spreading them among a larger body of users, and it makes costs to students more transparent. It also commits the university to tens or hundreds of thousands of dollars in spending each year – money that investment firms like KKR see as well worth the investment in companies like Instructure.
What is the point of higher education?
By Doug Ward
The future of colleges and universities is neither clear nor certain.
The current model fails far too many students, and creating a better one will require sometimes painful change. As I’ve written before, though, many of us have approached change with a sense of urgency, offering ideas for a future university that will better serve students and student learning.
The accompanying video is based on a presentation I gave at a recent Red Hot Research session at KU about the future of the university. It synthesizes many ideas I’ve written about in Bloom’s Sixth, elaborates on a recent post about the university climate study, and builds on ideas I explored in an essay for Inside Higher Ed.
The takeaway: We simply must value innovative teaching and meaningful service in the university rewards system if we have any hope of effecting change. Research is important, but not to the exclusion of our undergraduate students.
Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.
KU to receive a third of $120 million in federal earmarks going to higher ed in Kansas
By Doug Ward
Colleges and universities in Kansas will receive more than $100 million this year from congressional earmarks in the federal budget, according to an analysis by Inside Higher Ed.
That places Kansas second among states in the amount earmarked for higher education, according to Inside Higher Ed. Those statistics don't include $22 million for the Kansas National Security Innovation Center on West Campus, though. When those funds are added, Kansas ranks first in the amount of earmarks for higher education ($120.8 million), followed by Arkansas ($106 million) and Mississippi ($92.4 million).
KU will receive more than a third of the money flowing to Kansas. That includes $1.6 million for a new Veterans Legal Support Clinic at the law school and $10 million each for facilities and equipment at the KU Medical Center and the KU Hospital.
Nationwide, 707 projects at 483 colleges and universities will receive $1.3 billion this year through earmarks, Inside Higher Ed said. In Kansas, the money will go to 17 projects, with some receiving funds through multiple earmarks.
All but three of the earmarks for Kansas higher education projects were added by Sen. Jerry Moran. Rep. Jake LaTurner earmarked nearly $3 million each for projects at Kansas City Kansas Community College and Tabor College in Hillsboro, and Rep. Sharice Davids earmarked $150,000 for training vehicles for the Johnson County Regional Police Academy.
Kansas State’s Salina campus will receive $33.5 million for an aerospace training and innovation hub. K-State’s main campus will receive an additional $7 million, mostly for the National Bio and Agro-Defense Facility.
Pittsburg State will receive $5 million for a STEM ecosystem project, and Fort Hays State will receive $3 million for what is listed simply as equipment and technology. Four private colleges will share more than $7 million for various projects, and community colleges will receive $5.6 million.
2024 federal earmarks for higher education in Kansas
| Institution | Amount ($) | Purpose |
| --- | --- | --- |
| K-State Salina | 28,000,000 | Aerospace training and innovation hub |
| KU | 22,000,000 | Kansas National Security Innovation Center |
| Wichita State | 10,000,000 | National Institute for Aviation Research tech and equipment |
| KU Medical Center | 10,000,000 | Cancer center facilities and equipment |
| KU Hospital | 10,000,000 | Facilities and equipment |
| Wichita State | 5,000,000 | National Institute for Aviation Research tech and equipment |
| Pittsburg State | 5,000,000 | STEM ecosystem |
| K-State Salina | 4,000,000 | Equipment for aerospace hub |
| K-State | 4,000,000 | Facilities and equipment for biomanufacturing training and education |
| Fort Hays State | 3,000,000 | Equipment and technology |
| K-State | 3,000,000 | Equipment and facilities |
| KCK Community College | 2,986,469 | Downtown community education center dual enrollment program |
| Tabor College | 2,858,520 | Central Kansas Business Studies and Entrepreneurial Center |
| McPherson College | 2,100,000 | Health care education, equipment, and technology |
| KU | 1,600,000 | Veterans Legal Support Clinic |
| K-State Salina | 1,500,000 | Flight simulator |
| Newman University | 1,200,000 | Agribusiness education, equipment, and support |
| Seward County Community College | 1,200,000 | Equipment and technology |
| Benedictine College | 1,000,000 | Equipment |
| Wichita State | 1,000,000 | Campus of Applied Sciences and Technology, aviation education, equipment, technology |
| Ottawa University | 900,000 | Equipment |
| Cowley County Community College | 264,000 | Welding education and equipment |
| Johnson County Community College | 150,000 | Training vehicles for Johnson County Regional Police Academy |
| Total | 120,758,989 | |
A return of earmarks
Congress stopped earmarks, which are officially known as congressionally directed spending or community project funding, in 2011 amid complaints of misuse. They were revived in 2021 with new rules intended to improve transparency and limit overall spending. They are limited to spending on nonprofits and on local, state, and tribal governments. Earmarks accounted for $12 billion of the $460 billion budget passed in March, according to Marketplace.
Earmarks have long been criticized as wasteful spending and corruption, with one organization issuing an annual Congressional Pig Book Summary (a reference to pork-barrel politics) of how the money is used. Others argue, though, that earmarks are more transparent than other forms of spending because specific projects and their congressional sponsors are made public. They also benefit projects that might otherwise be overlooked, empowering stakeholders to speak directly with congressional leaders and making leaders more aware of local needs.
Without a doubt, though, they are steeped in the federal political process and rely on the clout individual lawmakers have on committees that approve the earmarks. That has put Moran, who has been in the Senate since 2010, in a good position through his seats on the Appropriations Committee, the Commerce, Science, and Transportation Committee, and the Veterans Affairs Committee.
What does this mean for higher education?
It’s heartening that higher education in Kansas will see an infusion of more than $100 million in federal funding.
Earmarks generally go to high-profile projects that promise new jobs, that promise new ways of addressing big challenges (security, health care), or that have drawn wide attention (cybercrimes, drones, STEM education). A Brookings Institution analysis found that Republican lawmakers like Moran generally put forth earmarks that have symbolic significance, “emphasizing American imagery and values.” In earmarks for higher education in Kansas over the past two years, that includes things like job training, biotechnology, library renovation, support for veterans, and research into aviation, cancer, Alzheimer’s, and manufacturing.
One of the downsides of earmarks, at least in terms of university financial stability, is that they are one-time grants for specific projects and do nothing to address shortfalls in existing college and university budgets or the future budgets for newly created operations. They also require lawmakers who support higher education, who have the political influence to sway spending decisions, and who are willing to work within the existing political structure. For now, at least, that puts Kansas in a good position.
Doug Ward is an associate director at the Center for Teaching Excellence and an associate professor of journalism and mass communications.
Everything you need to know for April Fools' Day
By Doug Ward
A short history lesson:
April Fools’ Day originated in 1920, when Joseph C. McCanles (who was only vaguely related to the infamous 19th-century outlaw gang) ordered the KU marching band (then known as the Beak Brigade) to line up for practice on McCook Field (near the site of the current Great Dismantling).
It was April 1, and McCanles was not aware that the lead piccolo player, Herbert “Growling Dog” McGillicuddy, had conspired with the not-yet-legendary Phog Allen to play a practical joke.
McCanles, standing atop a peach basket, raised his baton and shouted, “March!”
Band members remained in place.
“March!” McCanles ordered again.
The band stood and stared.
Then McGillicuddy began playing “Yankee Doodle” on his piccolo and Allen, disguised in a Beak Brigade uniform, raised a drum stick (the kind associated with a drum, not a turkey) and joined the rest of the band in shouting: “It’s April, fool!”
McCanles fell off his peach basket in laughter, and a tradition was born.
Either that, or April Fools’ Day was created in France, or maybe in ancient Rome, or possibly in India. We still have some checking to do.
Regardless, we at CTE want you to know that we take our April Fools seriously – so seriously, in fact, that we have published the latest issue of Pupil magazine just in time for April Fools' Day.
As always, Pupil is rock-chalk full of news that you simply must know. It is best read with “The Washington Post March” playing in the background. We don’t like to be overly prescriptive, though, especially with all the strange happenings brought on by an impending solar eclipse.
Doug Ward is an associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.