Scaling Student Assessment Series
Ep. 1 - Scaling Exams and Quizzes
Additional Resources for Scaling Exams
- How would a two-stage exam be described and implemented by its originators?
- What does propagation of learning look like in a two-stage exam?
- Are the positive effects for these exams simply effects of time-on-task?
- What are other examples of "testing effects" that I could capitalize on?
- How can students already in command of the material/skills learn from this?
Ep. 2 - Scaling Mid-Semester Feedback
Additional Resources for Scaling Mid-Semester Feedback
Important Points for Mid-Semester Feedback
While it’s beneficial to think about responses in the aggregate (done by you, a TA, or perhaps most reliably by AI), be sure to put eyes on each response. In written feedback, students often voice concerns or personal struggles that may need individual attention.
If collecting information live in class, don’t shy away from simple but effective methods of data collection:
- thumbs up, sideways, or down
- green, yellow, and red colored voting cards
- "Please stand if you..."
In a live class, restrict feedback to questions that students will feel comfortable answering in front of their peers.
Many polling platforms can collect live (and often narrative) feedback, and some offer free subscriptions below a user threshold. Here are just a few: Menti, Slido, Poll Everywhere, Socrative, Kahoot.
Anytime you ask students a question in class, make sure you're comfortable reacting to the responses live in front of them. If you aren't comfortable, consider a different method to collect information.
Ep. 3 - Scaling Student Voice
Ep. 4 - Scaling Student Voice - Bonus Episode
Additional Resources for Scaling Student Voice
- CTE resource on grading and creating rubrics
- Important aspects of increasing student voice and choice
- CTE resources from a workshop on Including Student Voices in Assessment
- Engaging Students as Partners in Learning and Teaching by Alison Cook-Sather, Catherine Bovill, and Peter Felten
Ep. 5 - Scaling Assessment with AI
Guidelines, Resources, and Perspectives: Using AI to Make Sense of Course Evaluations
- You've asked students to write something to you. There's an implicit moral mandate that you--not just AI--read what they write.
- Course evaluations are supposed to be anonymous, but this is not always the case in practice. Sometimes students, especially freshmen in large courses, self-identify in their comments. Names must be redacted before using AI (see the sketch after this list).
- Occasionally a student will reach out for help with a personal problem through mechanisms like course evaluations. Such events are low-frequency but high-stakes. Make sure you catch these kinds of comments yourself so you can respond appropriately to the student. (And of course, redact their identifying information before uploading to AI.)
- The reliability of AI is still evolving. At the end of your AI analysis, you should be able to answer the question: Is what I get back from the AI reasonably compatible with my intuitions after reading the comments myself? If not, can I dig deeper to find the discrepancy?
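As one concrete illustration of the redaction point above, here is a minimal Python sketch that masks roster names in comments before they go anywhere near an AI tool. The `redact` function, the roster list, and the sample comments are all hypothetical, not part of any particular platform; real redaction also needs a human pass to catch nicknames, misspellings, and other self-identifying details.

```python
import re

def redact(comment: str, roster_names: list[str]) -> str:
    """Replace any roster name that appears in a comment with [REDACTED]."""
    for name in roster_names:
        # Word boundaries (\b) avoid clobbering substrings of other words;
        # IGNORECASE catches "jordan" as well as "Jordan".
        comment = re.sub(rf"\b{re.escape(name)}\b", "[REDACTED]",
                         comment, flags=re.IGNORECASE)
    return comment

# Hypothetical roster: list full names before short forms so the longest
# match gets replaced first.
roster = ["Jordan Lee", "Jordan", "Lee"]
comments = [
    "I'm Jordan Lee and the two-stage exams really helped me.",
    "Office hours conflicted with my lab section.",
]

for cleaned in (redact(c, roster) for c in comments):
    print(cleaned)
```

Even after automated redaction, skim the cleaned comments yourself before uploading them; that same pass is your chance to catch the rare but high-stakes personal disclosures described above.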
In its own words, this 2023 Inside Higher Ed article details "the unique challenges that AI presents as well as guidance for management and faculty to effectively engage with artificial intelligence."
In addition to some thought-provoking notions about AI's role in assessment, the section titled "Remember the Human Element" in John Spencer's blog post provides a poignant anecdote that underscores the importance of the instructor's humanity--something AI can't (and shouldn't) replace.
Traditional course evaluation items are prescriptive, in that they ask specific, predetermined questions. At the other end, course evaluations often include an open-ended section, but even that prompt is static. What if there were a way to hear student thoughts on a course and drill deeper in real time? This 2018 article describes such an AI-driven tool.
Ep. 6 - Scaling Alternative Grading
Additional Resources for Scaling Alternative Grading Strategies
Alternative grading can take many forms, and each instructor may fashion their own system. However, most strategies fall into just a handful of categories, each with its own strengths and weaknesses. For the unfamiliar and those wishing to retool, the following may be particularly valuable:
- concise overview, including information on ungrading (Harvard)
- thorough background information, including delineation among systems: mastery grading, competency grading, contract grading, and specifications grading (UNL)
- a site with short examples from multiple other institutions (Miami University)
For those wanting a deeper dive, each of these sites draws on several primary and secondary sources.
Peer evaluation is a strategy that can help with alternative grading in larger courses. An immediate and frequent question from instructors: as non-experts, how good are students at evaluating each other? In small courses, it may be possible to improve peer evaluation skills through direct work with students; in large courses, such close coaching may not be feasible.
There are several technologies that help students develop skill in providing meaningful feedback to their student colleagues, and these vary considerably in cost, customizability, function, interface, and more. Here are just a few examples: Aropä, CrowdGrader, Mechanical TA, RiPPLE.