Student Learning Data Gathering & Analysis

There are many ways to assess student learning. Here are some ideas on where to find assessment data you may already have on hand, as well as some easy-to-implement options for new assessment processes you might consider.


Here are a few examples of where you can look for data on student learning.

Student Performance Data

  • Homework – Link to specific content/skills
  • Clicker Questions or other In-Class Work – Link to specific content/skills
  • Coursework Assignments – Evaluate with rubrics or other criteria that separate particular content and skill areas
  • Student Behavior and Engagement – Track how many GTA sessions students attend, the frequency with which they contribute in discussion, peer assessments of participation during group work, etc.

Student Feedback and Self-Assessment

  • Surveys – Students report how they prepare and spend their time, the level of challenge, the most useful dimensions of the class, etc.
  • Exam Wrappers – After an exam, students report how they prepared, what they did well, and what they did not do so well
  • Self-Evaluation of Learning – Students complete a rubric on their own assignment, or rate their degree of learning/goal attainment
  • Discussion and Observation – What have students told you about the course during office hours? What behaviors are you observing from successful students? Are there points in the semester where you observe breakdowns in student progress?


Below we offer some quick pointers on how to analyze and reflect upon the data you have.

How can you organize, analyze, and interpret your data?

  • Examine the distribution of student scores across different dimensions of your rubric: how many students score very high? Very low? In the middle? Are some rubric dimensions systematically stronger than others?
  • Look at the percentage of students from various subgroups (grade level, prior coursework, sociodemographic traits, or ability bands) who obtained successful grades in the course overall or on major assignments.
  • In many departments, any given course will both take students from upstream courses and send students into downstream courses. Is it possible to collaborate with colleagues to examine how end-of-semester performance in your course is either shaped by upstream (prior) coursework or shapes downstream (future) coursework?
  • Across the same type of assignment or exam administered at multiple points during the semester, can you plot the trend line of students’ average grades? Can you apply the same rubric framework repeatedly to see how students move across the rungs of a single rubric dimension over time?
  • Over time, can you examine the grade or performance trajectory of students who initially struggled, or of students who performed at high levels from the outset? Is the performance of initially under-performing students improving, or does it remain consistently low?
  • Has your course – or a particular assignment within the course – recently been redesigned, transformed, or flipped? Can you compare student performance data from this semester with similar data from previous semesters?
  • More generally, consider what types of pre-/post- comparisons you might leverage to explore changes over time – even if the comparison is based on how students enter and leave the course (e.g., asking students “how confident are you in …?” or “rate your level of knowledge on topic …” multiple times throughout the semester to assess changes that might be attributable to your course’s content and assignments).
  • Does your course rely on active learning, group work, or collaborative projects? Based on in-class observations or peer reviews of participation, you might be able to explore growth in group cohesion, collaboration skills, or other forms of interpersonal interactions between students.
  • Examine student feedback from required course evaluations or your own measure used to elicit feedback; consider how these impressions interact with major assignments, classroom dynamics, the sequencing and timing of tasks, lectures, and exams, etc.
  • Some of the best data garnered from student impressions is anecdotal, surfacing in conversation either during office hours or after class lets out for the day.
  • Consider data on class attendance, participation during in-class discussions, or completion rates of problem sets and other homework.
  • Look at levels of performance for students who did and did not engage with particular course resources or activities (e.g., how did students who participated in peer review of a paper perform compared to those who did not? Did interactions with a TA or discussion section improve students’ exam performance?)
  • Identify test items that map to different concepts or unit learning goals and examine the mean scores on those items to see which goals were best met, which ones were less well met, etc.
  • Think about how a major summative assignment – such as a research project or paper – draws on a variety of introductory, survey, and advanced learning.
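Several of the pointers above – score distributions across rubric dimensions, dimension-level means, and pre-/post- comparisons – can be prototyped with a short script before committing to a spreadsheet or statistics package. Below is a minimal sketch in Python using only the standard library; the rubric dimensions, scores, and survey responses are entirely hypothetical and stand in for whatever data your own course produces.

```python
from collections import Counter
from statistics import mean

# Hypothetical rubric scores: one dict per student, keyed by rubric
# dimension (dimension names and score values are illustrative only).
rubric_scores = [
    {"argument": 4, "evidence": 3, "mechanics": 4},
    {"argument": 2, "evidence": 2, "mechanics": 4},
    {"argument": 3, "evidence": 2, "mechanics": 3},
    {"argument": 4, "evidence": 3, "mechanics": 4},
]

dimensions = list(rubric_scores[0].keys())

# 1. Distribution of scores within each dimension:
#    how many students landed at each score level?
distributions = {
    dim: Counter(student[dim] for student in rubric_scores)
    for dim in dimensions
}

# 2. Mean score per dimension, to spot dimensions that are
#    systematically stronger or weaker than others.
dimension_means = {
    dim: mean(student[dim] for student in rubric_scores)
    for dim in dimensions
}

# 3. Pre-/post- comparison of self-reported confidence (1-5 scale),
#    e.g. the same survey question asked at the start and end of term.
pre_confidence = [2, 3, 2, 4]
post_confidence = [3, 4, 4, 4]
confidence_gain = mean(post_confidence) - mean(pre_confidence)

print(dimension_means)
print(f"Average confidence gain: {confidence_gain:+.2f}")
```

With the toy numbers above, the script would flag "evidence" as the weakest dimension and report an average confidence gain of +1.00 – the same kinds of patterns you would look for in real course data.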