The spread of evidence-based teaching practices highlights a growing paradox: Even as instructors work to evaluate student learning in creative, multidimensional ways, they themselves are generally judged only through student evaluations.
Students should have a voice. As Stephen Benton and William Cashin write in a broad review of research, student evaluations can help faculty members improve their courses and help administrators spot potential problems in the classroom.
The drawback is that too many departments use only student evaluations to judge instructors’ effectiveness, even as they put faculty research through a multilayered evaluation process, both internally and externally. Student evaluations are the only university-mandated means of gauging instructors’ teaching, and many departments measure faculty members against a department mean: Those above the mean are generally viewed favorably, and those below it are seen as a problem. That approach fails to account for the weaknesses of student evaluations. For instance, Benton and Cashin and others have found:
- Students tend to give higher scores to instructors in classes they are motivated to take, and in which they do well.
- Instructors of large courses and entry-level courses tend to receive lower evaluations than those who teach smaller classes and upper-level courses.
- Evaluation scores tend to be higher in some disciplines (especially humanities) than in others (like STEM).
- Evaluation scores sometimes drop in the first few semesters of a course redesigned for active learning.
- Students have little experience in judging their own learning. As the Stanford professor Carl Wieman writes: “It is impossible for a student (or anyone else) to judge the effectiveness of an instructional practice except by comparing it with others that they have already experienced.”
- Overemphasis on student evaluations often breeds cynicism among faculty members about whether administrators truly value high-quality teaching.
Looked at through that lens, student evaluations clearly cannot stand alone: We have not only a need but an obligation to move beyond them in gauging the effectiveness of teaching. We simply must add dimension and nuance to the process, much as we already do with the evaluation of research.
So how do we do that?
At the Center for Teaching Excellence, we have developed a rubric to help departments integrate information from faculty members, peers, and students. Student evaluations are a part of the mix, but only a part. Instead of relying on a single measure, we have tried to help departments draw the many facets of teaching into a format that provides a richer, fairer evaluation of instructor effectiveness without adding an onerous time burden for evaluators.
For the most part, this approach uses the types of materials that faculty members already submit and that departments gather independently: syllabi and course schedules; teaching statements; readings, worksheets and other course materials; assignments, projects, test results and other evidence of student learning; faculty reflections on student learning; peer evaluations from team teaching and class visits; and formal discussions about the faculty member’s approach to teaching.
Departments then use the rubric to evaluate that body of work, rewarding faculty members who engage in such approaches as:
- experimenting with innovative teaching techniques
- aligning course content with learning goals
- making effective use of class time
- using research-based teaching practices
- engaging students in hands-on learning rather than simply delivering information to them
- revising course content and design based on evidence and reflection
- mentoring students and providing evidence of student learning
- sharing their work through presentations, scholarship, committee work and other venues
Departments can easily adapt the rubric to fit their disciplinary expectations and to weight the areas they find most meaningful. We have already received feedback from many faculty members around the university, and we’ve asked a few departments to test the rubric as they evaluate faculty members for promotion and tenure, third-year review, and post-tenure review. We plan to test it more broadly in the fall.
We will continue to refine the rubric based on the feedback we receive. Like teaching itself, it will be a constant work in progress. We see it, though, as an important step toward making innovative teaching more visible and toward making teaching a more credible and meaningful part of the promotion and tenure process. If you’d like to be part of that, let us know.
****
This article also appears in Teaching Matters, a publication of the Center for Teaching Excellence.
Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.