At a meeting of the Center for Teaching Excellence (CTE) faculty ambassadors last week, Felix Meschke brought up a challenge almost every instructor faces.
Meschke, an assistant professor of finance, explained that he had invited industry professionals to visit his class last semester and was struck by how engaged students were. They asked good questions, soaked up advice from the professionals, and were eager to share ideas with speakers from outside the university.
The interaction was marvelous to watch, Meschke said, but how could he assess it? He could ask a question on an exam, but that didn’t seem right. The content of the discussions wasn’t as important as the discussions themselves and the opportunities those discussions brought to students.
In a sense, Meschke had answered his own question: His observations were a form of assessment. I suggested that he log those observations so he could provide documentation if he needed it. No, that wouldn’t yield a numerical assessment, but it would provide the kind of assessment he needed to decide whether to do something similar in the future.
All too often we think of assessment as something we do for someone else: for administrators, for accreditors, for legislators. Assessment is really something we need to do for ourselves, though. Thinking of it that way led to an epiphany for me a few years ago. Like so many educators, I approached assessment with a sense of dread. It was one more thing I didn’t have time for.
When I started thinking of assessment as something that helped me, though, it didn’t seem nearly so onerous. I want to know how students are doing in my classes so I can adapt and help them learn better. I want to know whether to invite back guest speakers. I want to know whether to repeat an assignment or project. I want to know what students report about their own learning. All of those things are natural parts of the teaching process.
That sort of thinking also helped me to realize that assessment doesn’t have to be quantitative. Assessments like quiz and exam grades can indeed point to strengths and weaknesses. If a large majority of students fails an exam, we have to ask why. Was there a problem in the way students learned a particular concept? A flaw in the wording of the exam? A lack of studying by students?
I rarely give exams, though. Instead, I use projects, journals, and participation.
I use rubrics to grade the projects and journals, but the numbers don’t tell me nearly as much as the substance of the work. Only through a qualitative assessment do I get a sense of what students gained, what they didn’t gain, and what I need to rethink in future semesters.
In the class Meschke described, students applied their learning through active participation. Trying to put a numerical value on that would in some ways cheapen the engagement the students showed and the opportunities they gained in interacting with professionals. Observing those interactions provided excellent feedback to Meschke, though, and by writing a brief summary of those observations, he could provide documentation for others.
The message was clear: Do it again next semester.
And when it comes to assessment, the message is clear, as well: Do it for yourself.
Additional resources
Portfolio Assessment: An Alternative to Traditional Performance Evaluation Methods in the Area Studies Programs, by Mariya Omelicheva
Assessment Resources for departments and programs at KU
Combining Live Performance and Traditional Assessments to Document Learning, by the School of Pharmacy
Doug Ward is an associate professor of journalism and the associate director of the Center for Teaching Excellence. You can follow him on Twitter @kuediting.