Fine-Tuning Undergraduate Learning Outcomes Assessment
In fall 2007, the Kress Foundation Department of Art History began collecting and examining samples of student work with the aim of assessing learning outcomes and reevaluating its undergraduate curriculum. Since then, it has become apparent that such an assessment must be structured according to very specific criteria in order to achieve the most accurate results. Therefore, we defined our ratings scale more clearly, expanded the type and number of skills being assessed, and targeted them more directly. The experience of several years of fine-tuning this process allowed us to develop a more expansive, discipline-specific rubric to evaluate upper-level undergraduate writing that will facilitate future learning outcomes assessments and provide us with the most accurate data possible.
—Sally J. Cornelison, Amy McNair, & John Pultz (2011)
Background
Recently, there has been a shift among the Kress Foundation Department of Art History faculty toward developing undergraduates’ writing and analytic skills as they prepare for future careers or graduate programs. We used the Provost’s initiative as a step toward deciding what we as art historians have in common and want to impart to the students in our discipline. In a department as diverse as ours, a discussion of methodologies could go on forever, but a discussion of pedagogical goals allowed us to focus on something concrete and find surprising commonalities.
Implementation
We chose to evaluate learning in 500-level art history courses, mostly made up of junior and senior undergraduates, so we could better clarify goals for students nearing the end of the program. Five faculty members submitted samples of student work from any one assignment they had given that semester, and we developed a simple rubric to evaluate this work according to the major skill sets we hoped to see.
Evaluation
Though samples of student work showed our upper-level undergraduates’ ability to describe and analyze works of art, we did not see evidence that they were engaging with other scholars’ arguments or synthesizing information to form their own. In the future, we will ask faculty to attach a clearer explanation of the specific goals of each assignment they submit, so we can see whether students are unable to meet these goals or are simply not being asked to do so.
Reflections
We have an archive of work from the past year that we will evaluate, but first the committee will meet to decide how to assess the specific skills we have not yet seen applied. Fears of failure or judgment can make the prospect of changing courses daunting, but we hope faculty will recognize the wide variety of options available as we all use a new awareness of pedagogy to improve our courses.
Both faculty and students in art history participate in a field that focuses on translating visual images into verbal analyses and on responding to others’ interpretations of those images. When adapting this mission to our department and our courses, however, we had to cast a wide net to include the work of all faculty members. In a department as diverse as ours, a discussion of methodologies could go on forever, but a discussion of pedagogical goals allowed us to focus on something more concrete and find surprising commonalities.
Although the art history department has traditionally focused on a content-oriented curriculum, providing students with breadth of material, in the past few years there has been heightened interest in shifting our teaching to focus on writing and analytic skills, as well. For instance, though students may be exposed to both Western and non-Western art, they might not be exposed to a range of written assignments that allow them both to describe art and develop arguments about that art.
To begin this process, members of the department first met with Dan Bernstein of the Center for Teaching Excellence, who helped us develop a plan to evaluate our own teaching internally. Dr. Bernstein introduced us to research on teaching and learning that gave us a different framework for considering what we wanted our students to take away from their art history classes. Perhaps the most useful part of this discussion concerned Bloom’s Taxonomy, which delineates a hierarchy of six levels of understanding, from knowledge and comprehension through synthesis and evaluation, that educators might expect students to achieve as they complete coursework. Though art history faculty members certainly include these skills in their classes, we had never before taken a systematic look at how assignments across the department challenged students to reach the higher levels of understanding.
Agreeing that students should be able to describe a “work of art” led us to a conversation about the focus of our courses and instruction within the broad context of visual culture. Some faculty members argued that they would not consider everything visual to be a work of art; others suggested that this definition could change depending on regional and social contexts. In the end, we agreed that “works of art and/or objects of visual and material culture” more broadly encompassed the diversity of work within our department, without forcing ourselves into some of the major debates within art history.
Our assessment began in 2007 with the identification of four basic skills we expect our students to acquire, the creation of a simple rating scale of 1 to 3, and the collection of samples of written assignments and exams from 500-level classes. The basic skills are the following:
- Students will be able to identify and describe a work of art or object of visual or material culture.
- Students will be able to analyze a work of art or object of visual or material culture.
- Students will be able to identify and analyze an argument made about a particular object or field of study.
- Students will be able to develop their own arguments about a particular object or field of study.
Our assessment continued through a second phase, 2008 – 2010, and a third, 2011, so that we could evaluate the long-range application of these skills.
Assessment 2007
After working as a department to develop the four major skill sets we felt were necessary for students to acquire as graduates of our program, we considered where we might best find evidence of their accomplishments within the department. We decided that the students in 500-level courses, mainly junior and senior undergraduates, would be able to demonstrate more developed skills than those in survey classes that may emphasize more basic elements of writing and research.
Accordingly, we asked five faculty members to submit three or four examples of student work from any one assignment in their upper-division courses, whether essay responses from exams, term papers, or other assignments. In total, we received 19 pieces of student work. Since we solicited work in the middle of the semester, many faculty members simply contributed their latest assignments, which kept the collection of material diverse and representative. We hoped that allowing this sort of freedom in the process would help faculty see our request as a step toward improving department-wide teaching and learning strategies rather than as a threatening evaluation geared toward individuals.
We then developed a simple rubric to begin evaluating the extent to which each assignment demonstrated the four skills we expected to see. For each skill, the rubric gave a range of scores:
3 = Student utilized the skill
2 = Student attempted the skill but was unsuccessful
1 = Student did not attempt the skill
N/A = Assignment did not call for any use of the skill
Three faculty members read and rated the samples of student work, but a flaw in our approach was immediately evident. It was not clear to the reviewers which, if any, of our four basic skills the test or assignment addressed.
Assessment 2008 – 2010
With the first round of learning outcomes assessment behind us, we continued to collect samples of student work from 500-level classes, but this time we asked each professor who submitted work to provide guidelines explaining the goals and rationale for the assignment or test. In addition, with the aim of eventually assessing courses at other levels and better representing our pedagogical aims, we developed a more comprehensive list of skills we expect our students to acquire over the course of their undergraduate careers:
- Describe works of art and/or objects of material and visual culture
- Date works of art and/or objects of material and visual culture
- Recognize characteristics of style
- Analyze works of art and/or objects of material and visual culture
- Demonstrate satisfactory contextual understanding of works of art and/or objects of material and visual culture
- Critically analyze an argument
- Create and support an argument
- Demonstrate satisfactory research skills
Assessment 2011: Two case studies
In fall 2011, writing samples were collected from two 500-level classes, HA 503: Japanese Prints and HA 571: Modern Sculpture. The assignment from the first course required students to select one of several Japanese prints on display in the Teaching Gallery at the Spencer Museum of Art. The students then sketched the print they selected and wrote a three- to five-page paper in which they identified and described the work and considered how it fit into the Japanese art tradition and within the context of the material they had discussed in class. For the second course, students chose one of three objects on view in the Spencer Museum of Art’s 20/21 Gallery and wrote a 1,000- to 1,250-word collection catalogue essay on that sculpture. This assignment asked the students to discuss the biography of the artist who made the work in question, characterize the artist’s working methods and aesthetic, and offer a precise description, formal analysis, and interpretation of the work, supporting their essays with relevant contextual information. Once writing samples had been collected from 20 students in each class, one HA faculty member (the primary reader, who did not teach either class) and one advanced graduate student (the secondary reader) read and assessed each paper.
Assessment 2007
As a committee of four faculty members began reading and scoring each assignment using our rubric, we found that students received high marks, mostly scores of “2” or “3,” for the first two skills, in which they were asked to describe or analyze a work of art or object of visual culture. We were pleased that courses were clearly exposing students to these skills, but when we looked for evidence of the other two, analyzing scholarly arguments and developing their own arguments, it was generally absent. Students most often received scores of “1” or “not applicable.”
This initially confused us, since we had expected the best student work to exhibit all four skills. Gradually, we came to understand that different kinds of assignments might exhibit different skills and that we needed to focus on where to find evidence of the last two in particular. Because much of what we had was student work rather than the actual exam questions or term paper prompts, it was unclear which skills the professor was targeting, or whether a skill was being elicited directly or indirectly. For instance, an essay question from an exam might implicitly ask students to identify or analyze arguments from course readings, even if this is not apparent from the question itself.
Assessment 2008 – 2010
Select faculty members read and rated samples of student work collected during the spring and fall 2008, spring and fall 2009, and spring 2010 semesters. Unfortunately, the results were not as clear as we would have liked, for we discovered that having professors provide specifications for their assignments was not enough. Within each assignment, students were able to pick one of several possible questions to answer, explore different research topics, or respond to questions in a variety of ways. The result was a striking lack of uniformity in the skills attempted, which, when computed according to the established scoring process, made it appear as if the learning goals often were not being met when, in fact, they were simply not essential for that particular question or assignment. We concluded that it was virtually impossible to arrive at a true picture of student learning, much less graph it, without undertaking a very controlled assessment.
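To see how non-essential skills could skew the computed results, consider a hypothetical batch of ten samples in which six came from questions that never asked for a given skill. The numbers below are invented for illustration and are not the department’s data; the sketch shows only the arithmetic of the pitfall.

```python
# Ten hypothetical ratings for one skill on the original 1-3 scale.
# None marks samples whose question never called for the skill (N/A).
ratings = [3, 3, 2, 3] + [None] * 6

# Folding N/A samples into the average as if they scored 1 ("did not
# attempt") makes the skill look unmet across the board.
as_ones = [r if r is not None else 1 for r in ratings]
naive_mean = sum(as_ones) / len(as_ones)   # 1.70

# Excluding N/A samples shows the skill was largely met where it was asked for.
scored = [r for r in ratings if r is not None]
true_mean = sum(scored) / len(scored)      # 2.75

print(f"counting N/A as 1: {naive_mean:.2f}; excluding N/A: {true_mean:.2f}")
```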
In response to the problems encountered in our second round of learning assessments, in fall 2010 we solicited samples of written work from two introductory-level (100 & 200) classes that required skills 3 and 4 listed above. Once again, we asked the professor in question to include a copy of the assignment or exam key with each submission. This time, the results were much more satisfactory and accurate, with a majority of samples scoring a “3,” a few scoring a “2,” and none scoring a “1.” The one problem that arose was that our original ratings rubric did not account for work that demonstrated a skill only somewhat successfully. Therefore, we revised it as follows:
4 = Student utilized the skill successfully
3 = Student utilized the skill somewhat successfully
2 = Student attempted to utilize the skill but was not successful
1 = No evidence that student attempted skill
N/A = Skill not called for in this assignment
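As a minimal sketch of how ratings on this revised scale might be tallied, the snippet below encodes the scale and averages the ratings for each skill while keeping N/A entries out of the computation, which is the design choice that resolved the earlier scoring problem. The skill names and ratings are illustrative placeholders, not the department’s records.

```python
# Revised rubric scale; N/A is represented as None so it can be
# excluded from averages rather than scored.
SCALE = {
    4: "utilized the skill successfully",
    3: "utilized the skill somewhat successfully",
    2: "attempted the skill but was not successful",
    1: "no evidence the skill was attempted",
}

def skill_average(ratings):
    """Average the 1-4 ratings for one skill, ignoring N/A (None)."""
    scored = [r for r in ratings if r is not None]
    return sum(scored) / len(scored) if scored else None

# Illustrative ratings for one batch of samples (None = N/A).
batch = {
    "describe the object": [4, 3, 4, 3],
    "analyze the object": [3, 3, 4, 2],
    "critically analyze an argument": [None, 3, None, 2],
}
for skill, ratings in batch.items():
    avg = skill_average(ratings)
    if avg is None:
        print(f"{skill}: N/A (skill not called for)")
    else:
        print(f"{skill}: {avg:.2f} (nearest label: {SCALE[round(avg)]})")
```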
Assessment 2011: Two case studies
The assignment for HA 503 (Japanese Prints) did not require students to exercise or develop their research skills; therefore, each reader marked “Skill not Relevant to the Assignment” for the Research Skills criterion on the rubric. The Learner Outcomes Report generated from the data collected on each writing sample revealed that the primary reader was considerably more parsimonious in assigning a rating of 4 (excellent) to the four skills being evaluated (Control of Syntax and Mechanics, Content, Critical Thinking Skills, and Development and Support of a Thesis) than was the secondary reader. Apart from that discrepancy, the two readers’ evaluations were quite comparable. Students scored an average of 2.5 (between Acceptable [2] and Very Good [3]) in the Control of Syntax and Mechanics, Content, and Critical Thinking Skills categories. The weak link was the students’ ability to develop and support a thesis, on which they scored an average of 2.03 (Acceptable). Thanks to this data, the department can target the latter skill for development in future writing assignments.
The Learner Outcomes Report recorded slightly lower outcomes ratings for the sample writing assignment from HA 571 (Modern Sculpture). In this case the assignment did not require students to develop and support a thesis, so the readers marked that category “Skill not Relevant to the Assignment.” Students scored the highest average on the content of their papers (2.44) and the lowest on their research skills (1.86)—the latter despite the fact that key research materials had been put on reserve in the Art and Architecture Library and identified in the guidelines for the assignment. Another weak area was the students’ ability to control syntax and mechanics in their papers (2.11). As with the results of assessing the writing assignment for HA 503 described above, the department will use this data in its future efforts to help students hone their writing and research skills.
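For clarity about how such report figures arise, here is a sketch of pooling the two readers’ rubric scores into a per-criterion average. The reader scores below are invented to illustrate the arithmetic (they happen to pool to 2.50 for the first criterion) and are not the actual assessment data.

```python
# Hypothetical ratings from the two readers, keyed by rubric criterion;
# each inner list holds one 1-4 rating per student paper.
scores = {
    "Control of Syntax and Mechanics": {"primary": [2, 3, 2], "secondary": [3, 3, 2]},
    "Development and Support of a Thesis": {"primary": [2, 2, 1], "secondary": [3, 2, 2]},
}

for criterion, readers in scores.items():
    # Pool both readers' ratings and average them for the report.
    pooled = [r for ratings in readers.values() for r in ratings]
    print(f"{criterion}: {sum(pooled) / len(pooled):.2f}")
```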
Our experience in developing an effective way to evaluate our students’ written work proved invaluable when the University mandated that all departments assess written communication at the undergraduate level during the 2011 – 2012 academic year. Using the AAC&U “Written Communication VALUE Rubric” as a template, we developed a rubric that we feel will more accurately assess the skills our discipline requires and will provide accurate results to guide us in reevaluating our undergraduate curriculum. Although it borrows much of its structure and language from the AAC&U original, by including an “N/A” category (skill not relevant to the assignment) our rubric recognizes that not all writing assignments are created equal or require evidence of the entire gamut of writing, research, and critical thinking skills. For example, a paper written in response to an assigned reading does not require evidence of a student’s research skills, and a written description of a work of art does not require the development and support of a thesis. We clarified the language of the skill set headings so that they identify our learning goals more specifically. We also felt that our ratings scale should include a way to document a student’s failure to demonstrate mastery of a particular skill. Therefore, a rating of 4 connotes excellence, whereas a 1 reflects a student’s failure to reach the established educational benchmark(s).
Our goal is not to impose a uniform curriculum or pedagogical standard, but rather to frame the conversation in a way that encourages faculty to continue participating and learning from one another. We hope this project will help us move toward clearer, more developed goals for both lower- and upper-level students. Since students in upper-level courses are likely to go on to graduate programs or careers where writing abilities are key, it is important that we take responsibility as a department for our students’ writing.