Building KU's Teaching and Learning Community

Enhancing Student Engagement and Critical Thinking in a Large Classics Course — Pamela Gordon, Emma Scioli, Tara Welch (2016)

Overview

A team of three Classics professors redesigns a series of assignments in a large, lecture-format Classical Mythology course in order to deepen students’ ability to approach the material critically and responsibly.

Background

Classics 148: Introduction to Classical Mythology surveys Greek and Roman myth and, at a more general level, serves as an introduction to contemporary research methodology and theory, both within Classics and the Humanities more broadly. It is taken primarily by entering students, and no knowledge of Latin or Greek is required for the course. The course is offered in face-to-face, online, and hybrid formats. In all three formats, discussion is a key component, though there is some lecture/presentation. Departmental faculty members rotate teaching this course, and each faculty member has leeway to select texts, themes, and approaches that fit their expertise and interests but that honor a basic structure. Forms of assessment are largely similar across the different sections of the course (all versions have quizzes, exams, and analytical work), although there is some variation (e.g., the number of quizzes). Readings are also largely similar but not identical. A general goal for the course is to inculcate the concept that myths have cultural and chronological relevance and were often developed and applied to address particular cultural concerns. The primary goal of the redesign is to deepen our students’ ability to approach the material critically and responsibly. This goal has two interrelated parts: to develop assignments that do not assume, but rather develop, students’ critical thinking about Greek and Roman mythology, and to increase student engagement with the material.

Implementation

To accomplish our primary goals, our team has experimented with several assignment formats – a three-part scaffolded writing assignment, a “virtual museum” project, and a set of analytical worksheets. Each team member took the lead on one of these formats and scrutinized its strengths and weaknesses in achieving our goals.

Each scaffolded assignment had three components pertaining to the same material: an online assignment, an in-class discussion, and a writing assignment. The scaffolding was designed to allow students to engage with the material and discuss some of its important aspects before doing the writing assignment, which was the culmination of the tripartite assignment.

The “virtual museum” project asked students to write a few informative, well-crafted paragraphs about an object of art depicting a Greek god or goddess who is identified by an attribute (or a combination of attributes) typical for that god or goddess. Students had to find an object at a credible site (a Museum database, for example); cite and describe it accurately; and write an informative piece on the image, incorporating at least one quotation from an ancient literary source or scholarly source.

The analytical worksheets were designed to help students build the skills necessary to write a strong analytical paper. As detailed in the Y1 portfolio, the worksheets did not function as scaffolding for the paper as well as we had hoped and remained confusing for students. In Phase 2, a course taught to 230 students in Fall 2015, Welch launched a revised set of analytical assignments (worksheets plus paper) that responded both to the difficulties we identified in Phase 1 and to some of the lessons we learned from the three-part scaffolded assignments and the virtual museum in Phase 1.

Student Work

Data from the scaffolded assignments yielded two points indicating that student learning improved from Fall 2013 to Fall 2014. The average final score in 2013 was 70.20% (median 72.87%); the average final score in 2014 was 75.97% (median 81.88%), an improvement of more than 5% overall. In addition, the average grade on the Final Exam in 2013 was 62.63% (median 66%), while the average score on the Final Exam in 2014 was 73.86% (median 83%), an overall improvement of more than 10%. While several factors might account for this improvement, it is also likely that introducing the scaffolded assignments, and thus providing consistent opportunities to engage with the material beyond quizzes and tests, led to better retention of the course material and thus to higher scores on exams, which is reflected in the final grades for the course.

Data from the “virtual museum” assignment indicated that, for students who were already presenting A and B+ work in the course, the 2013 assignment was highly successful, but less accomplished students presented poor or mediocre work. The glaring problem was a lack of scaffolding. After analyzing students’ work from the 2013 course, we made dramatic changes to the assignment. Gordon designed this assignment in 2015 and piloted it in a different course (Classics 177). The major innovations of this new version are: 1) the use of scaffolding, and 2) required comparative work. Because Gordon developed this iteration of the Virtual Museum Project for Classics 177 rather than for our target course, we do not include the data here.

The data on the analytical worksheets suggests that the assignments did have a small but discernible effect on student learning, both in general and across specific measures. The repeated practice in identifying a thesis or argument led to an improved ability overall to create a thesis or articulate an argument. Similarly, paying attention to the organization of secondary sources seems to have helped students organize their own arguments effectively. These assignments were therefore successful in moving students from lower-order critical skills (recognizing, identifying, labeling) to higher-order critical skills (creating, articulating, building). At the same time, the assignments did not aid students in learning to find, evaluate, and deploy evidence in their own arguments. This shortcoming, we believe, can be attributed to the differences among the three assignments. Overall, the shift in achievement seems to be strongest for students who started out with a weaker grasp of the targeted skills and knowledge.

Reflections

The key lessons on which we are moving forward are these:

  • From the three-part scaffold: Frequent and repeated patterns of work (preparation, deepening, synthesis) help students become familiar with intellectual tasks and develop those skills, and allow us to nudge them consistently toward deeper critical thinking. Some of this work should be independent, but some can or should be collaborative.
  • From the virtual museum: Strong preparatory work, both in research skills and, more importantly, in what constitutes a good subject, is crucial in yielding the sorts of outcomes we desire. The preparatory work is particularly crucial for less talented students.
  • From the analytical worksheets: As with the Virtual Museum, the preparatory worksheets especially helped weaker students achieve appropriate and satisfactory levels of critical engagement with the material. However, the preparatory work must really dovetail with the intellectual goals we seek. It is not enough to think more about a text or artwork; one must think in a focused way. Students need even more help identifying evidence appropriate to an argument.

We have decided therefore to shift the assessed work in this class to reflect that preparatory work and the goals we wish students to achieve. We have reduced the quantity and changed the nature of examinations in the course and increased the quantity and quality of the critical, analytical work we ask students to do.

Overall, our interventions have shown that greater attention to preparatory work yields the greatest gains for students who begin the course with weak skills in critical analysis and inquiry, and we have seen a clear decrease in DFW rates since implementing some of our course redesigns. Since one early DFW bodes very ill for retention, and since we start with so many students, this course redesign has the potential to make a real impact on the overall success of KU’s most vulnerable students.



Background

Course Overview
Classics 148 examines a wide variety of Greek myths and their survival and metamorphosis in Latin literature. Throughout the semester, attention is given to the problems of comparative mythology and the related areas of archaeology and history. Significant attention is also given to the representation of myths in visual art. At a more general level, the course also functions as an introduction to contemporary research methodology and theory, both within Classics and the Humanities more broadly. No knowledge of Latin or Greek is required for the course, although students are expected to acquire and to be able to use technical terms (e.g., anthropomorphic, etiology, and polytheism). A general goal for the course is to inculcate the concept that “myths are good to think with,” that is, that myths have cultural and chronological relevance and were often developed and applied to address particular cultural concerns.

This course is taken primarily by entering students. For many of them, this course is the first and only time that they have ever been asked to think deeply about myths; they know various Hollywood versions, but they have never considered myth as an integral facet of ancient culture—or, for that matter, any culture. Throughout the semester, sources examined range across ancient texts from Greek and Roman culture, artworks depicting various scenes and characters from myths, and more modern renditions of ancient myths, particularly in film. The course is offered in face-to-face, online, and hybrid formats. In all three formats, discussion is a key component, though there is some lecture/presentation. Departmental faculty members rotate teaching this course, and each faculty member has leeway to select texts, themes, and approaches that fit their expertise and interests but that honor a basic structure. Forms of assessment are largely similar across the different sections of the course (all versions have quizzes, exams, and analytical work), although some variation is found (e.g., the number of quizzes). Readings are also largely similar but not identical.

Course Goals
The goals of the course are listed on the syllabus (pdf), as follows. Students should be able to:

  • recall and recount the plots and characters of major ancient Greek and Roman myths
  • situate Greek and Roman myths in the cultural framework of the people who told them
  • understand and use a technical vocabulary for the discussion of mythology in ancient literature
  • compare Greek and Roman myths to myths from other ancient cultures, and to modern myths
  • recognize several genres of ancient literature (tragedy, epic, hymn) and read them with ease
  • discern among and evaluate competing claims or approaches to myths or to the problems they explore
  • read critical scholarly literature, identifying and assessing the shape and agenda of a scholarly argument and interrelating it to other texts/readings
  • build and articulate your own scholarly argument about an ancient myth or myths
  • understand the cultural and individual contingency of myth
  • recognize myths at work and play in your own culture and life

Goals of Redesign
The primary goal of the redesign is to deepen our students’ ability to approach the material critically and responsibly. This goal has two interrelated parts: to develop assignments that do not assume, but rather develop students’ critical thinking about Greek and Roman mythology, and to increase student engagement with the material. To those ends, our team has experimented with several assignment formats – a three-part scaffolded writing assignment, a “virtual museum” project, and a set of analytical worksheets. Each team member took the lead on one of these formats and scrutinized its strengths and weaknesses in achieving our goals.

The Implementation section of this portfolio will discuss these three assignment formats in turn, including data collected and what we learned from the experimentation. Within the Reflections section, we suggest a way of combining the best elements of these three different assignments into a more cohesive, and repeatable, set of assignment types. Lastly, we explore the broader impact of this transformation on the class, the department, and the university.

Historical Context of the Redesign
Up to 2013, the course was taught to 500 students in a large lecture format with no discussion sections. It was a general education course designated to fit the category “HL” (humanities and literature). Every teacher selected different texts and emphases. Because of the size of the course, assessment was done mostly by exams and did not address the higher-order goals listed above (bullets 5-9). In 2013, KU’s general education program changed to the KU Core; the course shrank and became a general humanities course rather than a “humanities and literature” course. We realized the need and opportunity to transform the course: we could broaden our focus on ancient literature to include visual culture; we could take advantage of the smaller class size to do more in-depth work (since we would have fewer students’ work to grade); we could transform time in class by lecturing less and discussing more, generally making the class more engaging; and we could position the course to fulfill KU’s critical thinking goal.



Implementation & Student Work

PART 1: OBSERVATIONS ON PHASE 1 OF THE REDESIGN: INTRODUCING THREE SCAFFOLDED ASSIGNMENTS


Three Three-Part Scaffolded Assignments Introduced in Fall 2014
Each Scaffolded Assignment had three components pertaining to the same material. The scaffolding was designed to allow students to engage with the material and discuss some of its important aspects before doing the writing assignment, which was the culmination of the tripartite assignment.

Assignment 1: Online assignment instructions, In-class discussion instructions, Writing assignment instructions
Assignment 2: Online assignment instructions, In-class discussion instructions, Writing assignment instructions
Assignment 3: Online assignment instructions, In-class discussion instructions, Writing assignment instructions

The writing assignments were graded according to rubrics.

Online Assignment. The primary purpose of the Online Assignment was to encourage students to complete the reading assigned in advance of the class. The students received full credit (3 pts) for their posts if they responded to the prompts and lost points if they simply failed to answer a question or wrote almost nothing.

In-Class Discussion. Students gathered in groups of 4-8 students to discuss their assigned prompt (A or B). One scribe was assigned per group to record the group’s responses in paragraph form. The names of the group members were recorded at the top of the page. Students were not given credit for this assignment, but the group responses were collected and participation for each student was noted.

Writing Assignment. Students were asked to write a 1-page essay that elaborated upon the material they had responded to in the Online Assignment and had discussed in small groups during one class period. This was a higher stakes assignment than either of the preliminary steps.

Assessment data for Fall 2014

The following sections analyze the data and evaluate the work of a group of 80 students from the course. These 80 were chosen because they signed permission forms allowing their names to be used in sharing samples of their work in the portfolio.

1. In 2015 Scioli began evaluating the assessment data from Fall 2014 for the three-part scaffolded assignments. Starting with the work from a small sample of 5 students, Scioli learned the following:

  1. High performance on preliminary assignments (answering online questions and participating in class discussion) did not necessarily lead to high performance on the subsequent writing assignment.
  2. Strong students scored well on all three writing assignments, while weaker students’ scores went down on the second and third writing assignments. This is likely because the second and third writing assignments were conceptually more difficult, whereas the first assignment was more straightforward.

2. Examination of the small data set (looking at grades and analyzing the written work of 5 students) from this year provided the basis for Scioli’s discussions with her peer triad in Spring 2016. In summer 2016, Scioli undertook to evaluate the work of 80 students (chosen because they had signed waiver forms allowing their names to be used in the portfolio). Below are statistics for student performance across two categories. The first category is their performance on all three components of all three scaffolded assignments (tables 1-3). The second is the progress they made between writing assignment 1 and writing assignment 3 (tables 4 and 5).

Reflections on Assessment Data

Table 1.
Online Assignment 1. Most students (68 of 80) received a perfect score (3/3) on their Online Assignments. This data suggests that we achieved our participation goal for the sample group, and that most participants did at least the minimum work required to receive a perfect score. Among the students who scored 3/3, there is significant variety in quality between those who did the bare minimum and those who gave more sophisticated responses (see examples here [pdf]).

In-Class Discussion 1. Although there was a high level of participation in in-class discussion 1, it is impossible to know which members of the group participated (since each group submitted a collective response). The submissions do show that each group engaged with the prompts and submitted work that reflected this. Again, there is a wide variety of quality and attention to detail in the responses.

Writing Assignment 1. The average score (9.3/10, A) on Writing Assignment 1 is high. Scores of 10/10 are not abundant (less than one quarter of the grades), but there are several 9 and 9.5 grades. The lowest score is a 6 (just one student). No one who submitted the assignment received a failing grade. While there was a notable difference in quality between some excellent and nuanced papers (10/10) and other papers that simply checked the boxes (9/10), this disparity was not reflected in the grades (A vs. A-). [Click here for pdf examples of assignments that received 9/10 and 10/10 and discussion of the distinctions]. While there is surely work to be done to improve the rubric and make the grading of these assignments more rigorous, it is nevertheless encouraging that we saw a high level of student engagement across all three components, and writing assignments that followed the instructions closely. Because the prompt for the writing assignment followed very closely upon the prompt for the online assignment, students who had done the online assignment went into the paper with material already selected for their evidence, and thus did not have to work from scratch in composing the paper.

Examples of Writing Assignment 1 showing varying levels of achievement and assessment of the differences (all pdf): low, middle, and high. Faculty assessment of this work (pdf).

Table 2.
As is evident from the data on Assignment 2, students struggled more with this assignment than with the others. The reasons for the struggles, and hence the lower grades on both the online assignment and the writing assignment, are twofold. First, the reading material for this online assignment represented a departure from the previous assignment in that it asked students to read a scholarly essay and answer questions about it. Second, for the writing assignment, students had to relate the ideas presented in the scholarly essay back to the primary text assigned for that week. Reading the essay was additional work on top of the primary reading and, moreover, was an unfamiliar type of reading. The writing assignment, which was meant to be the synthesis portion of the project, asked students to synthesize ideas from the essay and from the primary text, thus presenting a new challenge. An additional pitfall was that students had trouble defining the term “feminism” as applied to a literary text, instead relying upon a contemporary or “popular” definition of the term that did not correlate directly to the text of Euripides’ Medea.

Online Assignment 2. This assignment asked students to read a scholarly essay posted on Blackboard and to choose between two prompts. Both prompts asked the same first question, which was simply designed to get students to find a specific argument in the essay and summarize it. The second question in each prompt asked them to identify evidence presented in the essay for reading Euripides’ play Medea as either feminist or misogynist. The idea behind this second question in the prompt was to get students to recognize and understand how a scholar makes and supports an argument in an essay by using examples from a primary source text. I was hoping that students would be able to emulate this in the writing assignments that would follow upon the online work. (See examples here [pdf].)

In-Class Discussion 2. A slightly smaller number of the group participated in discussion. Looking at the responses, I note a wide variety of definitions of the terms “feminism” and “misogyny,” many of which are standard definitions of the terms, but are not directly applicable to analysis of Euripides’ play. Also noted is the frequent use of the non-existent term “feministic.” Within the groups, students’ abilities to find examples of passages within the play to support a feminist or misogynist interpretation vary greatly. Some groups may not have had access to the texts (even though students were advised to bring their copies to class). For the most part, however, I see engagement with the question and an attempt to track down passages that would be useful in arguing for a feminist interpretation of the play. The groups tasked with finding evidence for a misogynist reading of the play were less successful, as they got hung up on reading the character Jason as a misogynist, rather than thinking about how one could argue that the playwright Euripides underscores Athenian misogynist ideals through his depiction of both central characters (which is the argument of the scholarly essay they read).

Writing Assignment 2. This assignment asked students to follow up on the work they had done for the online assignment and in-class discussion by responding to a statement from the assigned scholarly essay and either agreeing or disagreeing with it, arguing for or against its suggestion that Medea is in fact NOT a feminist play. The prompt was meant to get students to argue for the play as either feminist or misogynist (as the online assignment prompt had prepared them to do), but when worded in terms of a response to a specific statement, it may have been mistaken for a request for new information.

What we had hoped that students would get out of reading the essay is that the play can be deemed misogynist or feminist based not upon one character’s actions, but upon its statements about the plight or role of women in 5th-century Athenian society and whether or not it condemns or upholds this societal oppression. But many students fell into a similar trap in the paper, arguing that the play is feminist because Medea (a woman) is the central character and is successful at getting revenge against her husband. Although it was not a requirement that students should emulate the methods of the scholarly essay in their own writing, I had hoped that after engaging with the essay in the online assignment and discussing its ideas in class, they would be more prepared to create an argument that was more in line with the author’s approach to defining the play as either feminist or misogynist. This desire needed to be more clearly articulated in order for students to pick up on it.

Examples of Writing Assignment 2 (all pdf): low, middle, and high. Faculty assessment of this work (pdf).

Table 3.
Online Assignment 3. This assignment asked students to read the story of Pygmalion in Ovid’s Metamorphoses. The questions range from asking students to identify simple plot points to asking them to think about the meaning of Ovid’s language, such as his use of a simile. Although this assignment saw a drop off in participation from the previous one, there was still a high rate of participation among the group considered (more than 90% participation rate). (See examples here [pdf].)

In-Class Discussion 3. Although this in-class discussion took place close to the end of the semester, there is still strong participation (more than 75% of the group). Generally speaking, participants produced thorough responses to the two questions posed, each of which asked students to compare the story of Pygmalion with others they had read in Ovid’s Metamorphoses. The responses show a number of citations of direct lines from the text, indicating that students had heeded our request for them to bring their texts of the poem to class.

Writing Assignment 3. The average for the group was up on this assignment from the previous one, although it did not reach the level of Writing Assignment 1. This assignment was challenging, as it asked students to read a poem by a contemporary author and consider it as a response to Ovid’s story of Pygmalion. The goal was not to compare the modern poem and the ancient poem in terms of their quality or “authenticity” (as several students misinterpreted the prompt to mean), but rather to comment upon the modern poem as a critique of Ovid’s presentation of gender roles in the poem.

Examples of Writing Assignment 3 (all pdf): low, low/middle, middle, and high. Faculty assessment of this work (pdf).

Tables 4 and 5. Tracking Progress from Writing Assignment 1 to 3.
A small number of students experienced no change in their scores from Writing Assignment 1 to Writing Assignment 3. On average, scores went down from Writing Assignment 1 to Writing Assignment 3. In each group, about twice as many students saw a decline in scores as saw an improvement on the later assignment. Many students saw only incremental change (up or down) from Writing Assignment 1 to Writing Assignment 3, but where there was a difference of more than one point, far more students experienced a decline of 1 point or more than experienced an improvement of 1 point or more.

Observations about the Differences between Assignments 1 and 3

This section provides a comparison between the 3 individual components of Scaffolded Assignments 1 and 3, suggesting the strengths and weaknesses of each and analyzing how they prepared (or failed to prepare) students adequately for success on the Writing Assignment for each (which was the high-stakes component of each tripartite assignment at 5% of the final grade for the course).

Online Assignments 1 and 3.
Online Assignment 1 asked students to select a passage of their choosing from a section of Homer’s Odyssey. They were told to focus on a passage illustrating something common in the narrative, a god’s assistance of a mortal character. They then had to answer 6 questions about this passage. The first 5 questions asked them to provide evidence from the text to describe and contextualize the scene they had chosen. The final question asked for speculation about what their chosen scene might reveal about Greek attitudes toward divine intervention in the lives of mortals. I had not discussed Greek attitudes towards divine intervention in class, but speculation about this idea was going to be a feature of in-class discussion, so I wanted students to give some thought to the idea of how the material they encountered in literature might pertain to lived experience in advance of the discussion. They were asked to combine their responses to the individual questions into a unified paragraph.

One goal of this assignment was to introduce students to the proper way to cite lines from a primary source text. A second goal was to give them practice with writing a paragraph that integrated responses to individual questions.

While the assignment did get students to isolate a passage, giving evidence for a particular theme in the text and providing precise book and line numbers for the passage, it would have been more helpful to ask them to integrate citation of the text into the body of a paragraph, rather than to have it simply as a header.

Online Assignment 3 also asked students to locate evidence in their texts (this time Ovid’s Metamorphoses) in order to provide descriptive and contextual information about the passages. This time I assigned the passage (the story of Pygmalion in Book 10 of the Metamorphoses) instead of asking students to choose an example of a certain theme. Rather than synthesize answers to the questions in paragraph format, the students responded to 6 individual prompts that asked them to identify and analyze certain features of the Pygmalion story. Each response had to contain a minimum of 3 sentences.

The basic goal of this assignment was to get students to read the Pygmalion story closely. A secondary goal, as demonstrated in several of the prompts, was to allow students to practice paraphrasing the text in their own words, rather than citing directly from the text. Question 5 was the only question that asked students to compare an aspect of the story of Pygmalion in Ovid’s poem with a passage they had encountered in an earlier reading.

While the assignment did force students to read the text closely, paying attention to details such as the narrator of the story and the meaning of an individual simile, its questions did not correlate directly to the prompt for the writing assignment.

In-Class Discussion Assignments 1 and 3.
The in-class discussion prompts for both Assignments were similar in that they built upon the Online Assignment due the night before class. Both assignments asked students to draw upon their close reading of the assigned texts required for the Online Assignment to find evidence for some aspect of the work. Both assignments also asked students to find an example from elsewhere in the text to compare with the initial evidence, thus encouraging students to make connections among themes or episodes within a larger text. Because of the continuity among the in-class discussion assignments, students had ample opportunity to practice a particular set of skills.

Writing Assignments 1 and 3.
Writing Assignment 1 asked students to respond to a prompt that drew directly from the questions they had been asked to respond to in the online assignment and in the in-class discussion assignment that preceded it. Students who had completed both preliminary assignments had the foundations they needed to complete the writing assignment successfully: they had at least one example from Homer’s text to count toward the two required for the paper, and they had already thought about the question of Greek attitudes towards divine intervention in the lives of mortals. The writing assignment allowed students to contemplate and develop ideas they had already presented in online and in-class forums.

Writing Assignment 3 asked students to interpret a modern poem as a response to an ancient myth that we had read in class. While students had been exposed throughout the course to examples of the reception of ancient mythology in modern poetry, visual art, and film, this was the first time they were asked to interpret the modern work for a writing assignment. This made the assignment more difficult than Writing Assignment 1, for example, because it asked students to consider a new text in conjunction with the one they had read for the course. Furthermore, Writing Assignment 3 asked students to perform a task that they had not done on either of the preliminary assignments. While the online and in-class discussion assignments asked students to read the Pygmalion episode closely and think about it within its wider context in Ovid’s Metamorphoses, neither assignment asked students to think about the myth outside of its context or to think about the larger relevance of the myth for modern culture. This lack of continuity between the online and in-class assignments and the final written work may have also contributed to the lower grades on Writing Assignment 3.

Observations about Overall Improvement in Student Learning from Fall 2013 to Fall 2014

Two data points indicate that student learning improved from Fall 2013 to Fall 2014.

  1. The average for final scores in 2013 was 70.20% (median 72.87%). The average for final scores in 2014 was 75.97% (median 81.88%). This is an improvement of more than 5% overall.
  2. The average grade on the Final Exam in 2013 was 62.63% (median 66%). The average score on the Final Exam in 2014 was 73.86% (median 83%). In this case, the overall improvement was over 10%.

There are several factors that might account for the improvement noted above. Teaching the course for the second time, Scioli was more comfortable with the material and had made improvements to the choice of texts for the course to reflect student interests and abilities. It is also likely that introducing the scaffolded assignments, and thus providing consistent opportunities to engage with the material beyond quizzes and tests, led to better retention of the course material and thus to higher scores on exams, which is reflected in the final grades for the course.
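
To make the arithmetic behind these comparisons explicit, here is a minimal Python sketch of how such summary statistics could be computed from a gradebook export. The score lists are hypothetical placeholders; the actual 2013 and 2014 figures are the ones reported above.

```python
from statistics import mean, median

# Hypothetical final-exam percentages for two offerings of the course.
# These placeholder lists stand in for real gradebook exports; the actual
# 2013/2014 statistics are the ones reported in the portfolio text.
final_exam_2013 = [55.0, 60.0, 62.5, 66.0, 69.5, 72.0]
final_exam_2014 = [64.0, 70.5, 74.0, 78.0, 81.0, 85.5]

def summarize(label, scores):
    """Print the average and median of a list of percentage scores."""
    print(f"{label}: average {mean(scores):.2f}%, median {median(scores):.2f}%")

summarize("Final Exam 2013", final_exam_2013)
summarize("Final Exam 2014", final_exam_2014)

# Overall improvement, expressed in percentage points (as in the portfolio).
improvement = mean(final_exam_2014) - mean(final_exam_2013)
print(f"Improvement in average score: {improvement:.2f} percentage points")
```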

PART 2: DEVELOPMENT OF THE VIRTUAL MUSEUM: DATA AND OBSERVATIONS

Observation on Close Reading of Images (“Virtual Museum”)

Description of the Assignment
This is a project that Prof. Gordon originally created for Classics 148 in Fall 2013. It was originally called “Web-based Research Assignment: Recognizing the Gods and Goddesses in Art.” This assignment asked students to write a few informative, well-crafted paragraphs about an object of art depicting a Greek god or goddess who is identified by an attribute (or a combination of attributes) typical for that god or goddess. Students had to find an object at a credible site (a Museum database, for example); cite and describe it accurately; and write an informative piece on the image, incorporating at least one quotation from an ancient literary source or scholarly source.

It originated as a writing assignment that required students to integrate a) their understanding of a primary text with b) some basic scholarly information about the Greek divinities, and c) a newly acquired ability to recognize the divinities as they appear in ancient art. After reading the primary text (the Homeric Hymns) and working through several small assignments that checked reading comprehension (over a period of around ten days), students used a museum database to select and analyze an object that displayed at least one attribute (an object associated with a particular divinity in Greek or Roman visual culture) and/or an example of traditional iconography (a mode of representation associated with a mythological story).

Phase 1: Assessment of student work on the “Web-based Research Assignment: Recognizing the Gods and Goddesses in Art”

This analysis focuses on four representative examples of the work on this assignment as conceived by Gordon in Fall 2013.

Reflections on the Data

For students who were already presenting A and B+ work in the course, the 2013 assignment was highly successful, but less accomplished students presented poor or mediocre work. The glaring problem was a lack of scaffolding. The successful projects began with the students’ astute selection of objects from the database. Their choices demonstrated not only their grasp of the subject matter and their attention to detail, but also their thoughtful consideration of the themes of the course. But other students failed to produce good work simply because they had chosen uninteresting or inappropriate images. There was no mechanism for intervention by the instructor, so their first false steps resulted in poor work on the entire project.

Early intervention would have saved the weakest students from missing the opportunity to study an antiquity, and from nearly failing the assignment. Appropriate scaffolding, in-class discussion, or other pre-assignments would likely have led to a higher degree of success among students who were not already accomplished. Even students at the top of the class might have learned more from the assignment if they had been required to offer an in-depth comparison. The best work went well beyond the instructions. Better instructions might lead to stronger work by students who do not automatically take an idea and run with it.

Phase 2

After analyzing students’ work from the 2013 course, we made dramatic changes to the assignment originally conceived as the “Web-based Research Assignment: Recognizing the Gods and Goddesses in Art.” The project has now become the “Virtual Museum Project.” Gordon designed this assignment in 2015 and piloted it in a different course (Classics 177). The major innovations of this new version are: 1) the use of scaffolding, and 2) required comparative work. She will include a new iteration of it when she next teaches the Myth class.

The Virtual Museum Project as assigned in 2015 incorporated multiple steps that included more guidance and opportunities for feedback at the preliminary stages. This resulted in a much higher level of achievement. Because students were required to share their first step with the class, there was also more collaboration and intervention at this early stage. In addition, Gordon incorporated a simple development proposed by Scioli. This was to have students choose more than one image for their Virtual Museum, giving them the opportunity to observe meaningful similarities and contrasts (due to differences in genre, date, material, purpose, the artist’s innovation, etc).

Meanwhile, major museums were expanding their online databases and changing their policies on the use of images for non-commercial purposes. When Gordon first began this project, only the British Museum had a generous policy. But now other major museums such as the Met and the Museum of Fine Arts allow students to reproduce images from online collections, and even to post them online. Thus, students who have particular interests are able to find objects that they really want to study, and there are more opportunities for comparative work.

The strength of the Virtual Museum Project as designed in 2015 is the carefully designed scaffolding leading up to the final project. But a new problem developed as the Project became more sophisticated: students had trouble with the mechanics of loading their images onto our virtual museum site (on the Omeka platform). The platform gave the assignments a very professional appearance, but the program is not as user-friendly as Gordon had thought. This led students to focus too much on technical issues instead of on their research. In the future, Gordon will experiment with Scioli’s suggestion that students present their research in a much simpler format using the blog feature on Blackboard.

Because Gordon developed this iteration of the Virtual Museum Project for Classics 177 rather than for our target course, we do not include the data here.

PART 3: DEVELOPMENT OF THE ANALYTICAL ASSIGNMENTS: DATA AND OBSERVATIONS


Description of the Assignment
Professor Welch’s target for transformation was a pair of analytical worksheets (worksheet 1 and worksheet 2) designed to help students build the skills necessary to write a strong analytical paper. As detailed in the Y1 portfolio, the worksheets did not function as scaffolding for the paper as well as we had hoped and remained confusing for students.

In Phase 2, a course taught to 230 students in Fall 2015, Welch launched a revised set of analytical assignments (worksheets plus paper) that responded both to the difficulties we identified in Phase 1 and to some of the lessons we learned from the three-part scaffolded assignments and virtual museum in Phase 1.

First, all three analytical assignments now include an overview linking them to the particular goals listed on the syllabus that they address. All three assignments emphasize consistent learning goals across them in the hope of improving student mastery of certain critical skills (identifying/creating a thesis, organizing an argument, providing supporting evidence). We made these learning goals more visible to students on the worksheets and in the paper prompt.

Second, the worksheets themselves have been improved. The redesigned questions lead students more carefully from description and judgment to analysis; the workflow from description to analysis is made evident to the students with clear signposts, literally “Describe:” and “Analyze:”, helping the students know what sorts of answers might be expected. Moreover, the questions on the worksheets emphasize key aspects of paper writing, thus positioning them even more precisely as preparatory work for the final paper.

Finally, we cleared some class time for discussion of these assignments in advance of the due dates to clarify our goals, to practice the skills, and to respond to student queries.

Data for the three critical assignments

Using a 3-point rubric to align with Prof. Scioli’s data above, and responding to the three intellectual skills we hoped to inculcate, we collected data from all three assignments pertaining to identifying/creating a thesis, organizing an argument, and providing supporting evidence. The sample size is 22 students out of 200 enrolled; these 22 were selected randomly from the sub-group of those who had completed all the assignments in question.
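
As an illustration of the sampling step described above, here is a minimal Python sketch, assuming a simple gradebook dictionary; the roster size, assignment names, and scores are hypothetical placeholders rather than the actual course records.

```python
import random

# Hypothetical gradebook: student -> {assignment name -> score or None}.
# The names and scores are placeholders, not actual course data.
gradebook = {
    f"student_{i:03d}": {
        "WS1": random.choice([None, 1, 2, 3]),
        "WS2": random.choice([None, 1, 2, 3]),
        "paper": random.choice([None, 1, 2, 3]),
    }
    for i in range(200)
}

def completed_all(scores):
    """True if the student submitted every assignment in question."""
    return all(value is not None for value in scores.values())

completers = [s for s, scores in gradebook.items() if completed_all(scores)]
sample = random.sample(completers, k=min(22, len(completers)))
print(f"{len(completers)} completers; sampled {len(sample)} for analysis")
```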

Across the three assignments, we found that students made significant strides in identifying and articulating a thesis. Their understanding of the relationship between organization and argument did not change substantially; not surprisingly, it proved slightly harder to organize their own argument in the paper than it was to recognize it with the aid of pointed questions in another person’s work (WS2); most difficult was WS1, in which they had to recognize the organization of a website and how that organization promoted a certain concept of/argument about ancient mythology. Finally, the preparatory assignments did little to help them choose and use strong evidence to support their own argument.

Examples of Worksheet 1 (all pdf): fair and good
Examples of Worksheet 2 (all pdf): fair, good, and excellent.

Chart 2 collects the number of students who earned a score of 1, 2, and 3 on the thesis portion of their analytical assignments. Note the total number of low (1) scores dropped across the three assignments (9 students earned 1 on their first worksheet, 7 on the second, and only 5 on the paper); the number of 2 scores increased (6 students earned 2 on WS1, 13 on WS2, and 10 on the paper); and the number of high (3) scores remained roughly the same (7 on WS1, then 2 on WS2, then 7 on the paper). This suggests that the improvement in identifying and articulating a thesis was greatest for students who were initially weakest in that skill: students who initially earned 1s moved up to 2s. These initially weaker students are precisely the ones we aim to reach by improving the assignments’ scaffolding and working to build critical skills through repetition of related exercises. Not included in this chart, but relevant to it, is the fact that, looking at individual students’ trajectories, 9 improved over the 3 assignments, 9 decreased, and 4 showed no change from first to final assignment. Since the overall average improved, the average increase for the increasers was greater than the average decrease for the decreasers.

Similarly, in terms of organization, the number of poor scores (1) decreased across the semester (7 to 4 to 3) while the number of good (2) scores increased (10 to 11 to 17). The increased average in organization scores was therefore achieved not by strong students becoming even better, but by weaker students becoming proficient. Looking at individual students’ trajectories on this measure, 10 students improved their scores on organization; 7 decreased their scores; and 5 remained roughly consistent across the three assignments.

Chart 4 unpacks the disappointing results in student achievement in choosing and using evidence to support arguments. The number of 3 scores remained steady across the three assignments (5 to 5 to 5) – strong students maintained their strength throughout – but more students got poorer scores (1) as the semester went by (3 to 5 to 8; see the rising trend in the first cluster from blue to red to green), and fewer got good grades (2) (14 to 12 to 9; see the falling trend in the second cluster from blue to red to green). To be sure, this data does not show which students rose and which fell; some students might have risen from 1 to 3 and others fallen from 3 to 2, for example. Nevertheless, looking at students’ individual trajectories across this measure, of the 22 students included in the data, 6 improved their scores for “use of evidence” across the 3 assignments; 5 remained the same throughout; and 11 had scores that decreased. It seems clear that the assignments are not effective at helping students master the use of evidence.
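
The individual-trajectory tallies reported above (how many students improved, declined, or stayed the same between the first worksheet and the paper) can be computed with a short routine along these lines; the per-student rubric scores shown are made-up examples, not the actual data.

```python
# Hypothetical per-student rubric scores (1-3) on one measure, in the order
# WS1, WS2, paper. Placeholder data for illustration only.
evidence_scores = {
    "student_01": (3, 3, 3),
    "student_02": (2, 1, 1),
    "student_03": (1, 2, 3),
    "student_04": (2, 2, 2),
    "student_05": (3, 2, 1),
}

def classify(first, last):
    """Compare a student's first and final scores on one measure."""
    if last > first:
        return "improved"
    if last < first:
        return "declined"
    return "no change"

tally = {"improved": 0, "declined": 0, "no change": 0}
for scores in evidence_scores.values():
    tally[classify(scores[0], scores[-1])] += 1

print(tally)  # e.g. {'improved': 1, 'declined': 2, 'no change': 2}
```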

Reflections on the Data

Did the three analytical assignments build students’ skills and lead to greater outcomes?
The data suggests that the analytical assignments did have a small but discernible effect on student learning, both in general and across specific measures. The repeated practice in identifying a thesis or argument led to an improved ability overall to create a thesis or articulate an argument. Similarly, paying attention to the organization of secondary sources seems to have helped students organize their own arguments effectively, though the improvement is somewhat smaller than for the articulation of a thesis. These assignments were therefore successful in moving students from lower-order critical skills (recognizing, identifying, labeling) to higher-order critical skills (creating, articulating, building).

Nevertheless, the assignments did not aid students in learning to find, evaluate, and deploy evidence in their own arguments. This shortcoming, we believe, can be attributed to the differences among the three assignments. In the first assignment (WS1), the students were asked directly, “What sort of evidence do the authors use to make their points?” Of the websites targeted for the assignment, one palpably used no evidence and the other used a great many textual and artistic resources to back up its claims. The evidence question was perhaps misleadingly easy on this first assignment, setting an artificial high at the outset. The task of evaluating evidence was somewhat more difficult for the second worksheet, which covered an argument made in academic prose. Even so, we would have expected this measure to be more consistent between WS2 (identifying evidence in academic prose) and the paper (using evidence in academic prose).

Which students were affected?
As we mention above for graphs 3.a.2, 3.a.3, and 3.a.4, the shift in achievement seems to be strongest for students who started out with a weaker grasp of the targeted skills and knowledge. This is true both for improvement – weak students showed the most improvement in their scores as a result of the retooled assignments – and for deterioration. Strong students remained strong, and their scores stable, throughout. Apparently, students already acculturated to the skills and tasks these assignments required did not benefit as much from the increased scaffolding and practice, but were able to apply the skills in a variety of settings. Conversely, students less acculturated to identifying a thesis, understanding the role organization plays in making an argument, and using effective evidence learned how to do so by working through these assignments, but were more vulnerable to the difficulties of applying those skills in diverse contexts.

Discussion assignment description

Also as a result of findings in Phase 1, Welch incorporated online and in-class discussion across the semester to deepen students’ engagement with the material and to promote learning from and among peers.

The “three-part scaffold” detailed above included online and in-class discussion as a way to engage students in the material and to promote their deeper preparation for writing assignments. As a further experiment, in Welch’s Fall 2015 class we disaggregated discussion from the paper and offered seven opportunities across the semester for students to respond to a prompt on the Blackboard discussion boards, in small groups, and then to participate in a deeper discussion in class the following day. Only the online component was assessed. Students had to complete any 5 of the 7 assignments; each was due at a set point in the semester.

Data for Discussion Assignment

We measured student scores on a 10-point scale; for the purposes of this portfolio, we scaled the scores down to match the 3-point scale we have been using so far. Because the students could choose which 5 prompts to respond to, we measured one early score (prompt 1 or 2), one mid-semester score (prompt 3, 4, or 5), and one late score (prompt 6 or 7) from the 7 discussion assignments. We hoped to see some improvement across the semester, but this improvement is not in evidence in the data.
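
Since the portfolio does not spell out the conversion rule, the sketch below assumes a simple proportional rescaling of the 10-point discussion scores onto the 3-point rubric, together with the early/mid-semester/late selection described above; the prompt groupings come from the text, but the scores and the rounding are hypothetical.

```python
# Hypothetical 10-point discussion scores keyed by prompt number (1-7);
# each student completed any five of the seven prompts.
posts = {1: 9.0, 3: 8.0, 5: 7.5, 6: 6.0, 7: 5.5}

def to_three_point(score_out_of_ten):
    """Proportionally rescale a 10-point score to the 3-point rubric (assumed rule)."""
    return round(score_out_of_ten * 3 / 10, 1)

def pick(posts, prompt_numbers):
    """Return the rescaled score for the first prompt the student completed in the group."""
    for n in prompt_numbers:
        if n in posts:
            return to_three_point(posts[n])
    return None  # the student skipped this part of the semester

early = pick(posts, (1, 2))    # prompt 1 or 2
mid = pick(posts, (3, 4, 5))   # prompt 3, 4, or 5
late = pick(posts, (6, 7))     # prompt 6 or 7
print(early, mid, late)        # e.g. 2.7 2.4 1.8
```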

The steady decline in discussion scores across the semester is due, I believe, to fatigue. Furthermore, tracking individual students’ scores shows that the decrease is across-the-board; it is not the result of a steep decline by a few students, but a steady decline among the majority. 5 students improved their discussion scores across the three assessed posts, 4 remained the same, and 13 saw falling scores. Click the following links for samples of fair and excellent student work (with explanation of those ratings).

Examples of online discussion posts (all pdf): fair and excellent.

Reflections on the Discussion Data

Why did scores on discussion posts fall?
Scores dropped consistently across the semester on the discussion posts. One reason for this might be simple fatigue, as students are burdened with more responsibilities toward the end of the course than at the beginning. The assignment structure might also have contributed. Students were given seven opportunities to participate in targeted online discussion over the course of the semester; they were required to participate in five in order to earn points for participation. Eager students got a good start on the first online forum, while weaker students put off their participation until the end. Our sampling sought to mitigate this effect, since we selected students who had participated in #1 or 2, #3 or 4, and #6 or 7, but this structure, and the possibility it left open for students to wait until the last minute to complete the assignment, might have left a trace in the scores.

PHASE 3: IMPLEMENTATION AND DESCRIPTION

Based on our results from Phases 1 (2014) and 2 (2015) across all types of assignments, our broad finding is that students in this course need help moving from simply reading the assigned material for basic content to achieving the level of critical interpretation we desire for them. This need for help was consistent regardless of the type or content of the assignment. The key lessons on which we are moving forward are these:

  • From Scioli’s three-part scaffold (Phase 1, 2014): Frequent assignments distributed across the span of the semester ensure that students read the assigned material at a sustained pace. Repeated patterns of work (preparation, deepening, synthesis) help students become familiar with intellectual tasks and develop those skills necessary for success on an assignment, and allow us to nudge them consistently toward deeper critical thinking. Some of this work should be independent, but some can or should be collaborative.
  • From Gordon’s virtual museum (2013, 2015): Strong preparatory work, both in research skills and, more importantly, in what constitutes a good subject, is crucial in yielding the sorts of outcomes we desire. The preparatory work is particularly crucial for less talented students.
  • From Welch’s worksheets (Phase 2, 2015): As with the Virtual Museum, the preparatory worksheets especially helped weaker students achieve appropriate and satisfactory levels of critical engagement with the material. However, the preparatory work must really dovetail with the intellectual goals we seek. It is not enough to give students a second opportunity to think more about a text or work of visual art; instructor comments must suggest to students how to approach the assigned material with a focus upon critical analysis. Students need even more help identifying evidence appropriate to an argument.

We therefore decided in Phase 3 (Fall 2016) to shift the assessed work in this class to reflect our emphasis upon the importance of preparatory work and our conviction that more opportunities to engage with the assigned material across the breadth of the semester will help students to reach the goals we wish them to achieve. We thus reduced the quantity and changed the nature of examinations in the course and increased the quantity and quality of the critical, analytical work we ask students to do. For the latter, the critical work, we allowed students to practice and thus gain experience with a given topic or methodology by giving assignments that call upon the same skills across several assignments.

To this end, we designed seven assignment clusters, each with three parts: an online preliminary assignment, an in-class discussion, and a component that is turned in for nuanced evaluation, often a writing assignment. Each of the assignment clusters was designed to enable students to practice the same skills – paying attention to details of context, such as plot and structure while reading a text, and iconography and medium while studying a work of visual art; gathering evidence to support an argument; and learning to synthesize these preliminary observations in a plenary assignment. What is more, students repeated these skills in three thematic areas: two of the seven assignments focus on close textual reading, two on secondary sources, and two on analysis of visual art. The seventh assignment combines text, art, and secondary sources.

Assignment 1
Assignment 2
Assignment 3
Assignment 4
Assignment 5
Assignment 6
Assignment 7

The assignments linked in the chart above reflect various interventions made in Phase 3:

  • We aligned the preliminary assignments (online and in class) more closely with the prompt for the writing assignments, so that students had a chance to practice skills required for success on the paper (which was worth more of their grade than the lower-stakes preliminary assignments).
  • We scaffolded the assignments so as to help students become familiar with and have an opportunity to discuss the primary material, and included material designed to make students aware of how doing the preliminary assignment seriously would help them to write better papers.
  • We offered a (low stakes) grade for the in-class participation so that there was more incentive to take it seriously.
  • We strengthened online discussion. We continue to believe that having students post their responses to online discussions within small groups, rather than in a discussion forum for the whole class, offers students a sense of community. Small-group discussion in class furthers the same goal of developing community and sharing ideas. Especially valuable in a large lecture course is the opportunity for students who might be intimidated by speaking before an audience of 250-300 students to share their ideas in small groups. Students remained in the same small online discussion groups throughout the semester.
  • We maintained small-group discussion in “flipped” classes. The groups in class were flexible but overlapped from class to class, since they were assigned based on location in the classroom.
  • We improved the rubrics. The rubrics for the Online and Writing Assignments are now more detailed and coordinated with the actual assignment instructions given to students. We used the rubric feature on Blackboard to track students’ work.
  • We took greater care to ensure that grading standards and feedback were consistent across all graders. Since written/analytical work now counts for more toward the final grade (because of the reduced weight of the exams), there is an even greater need to establish specific and detailed guidelines for grading this type of work and to make sure that we allow instructors time for the nuanced evaluation required.
Data and Observations

We gathered data across all assignments and for all learning outcomes (1. the articulation of an argument; 2. the use of appropriate evidence; 3. clean mechanics, such as spelling and citation style). Our data set includes scores for 36 students, culled from the subset of those who gave us permission to use their work AND who had completed all the assignments (we thought the inclusion of zeroes would muddy the picture of student achievement we were seeking). Each student has 36 scores, resulting in a set of 1,296 data points (36 students × 36 scores), organized as follows (see the sketch after this list):

  • 36 students, each completing six analytical assignments. (We ran out of time and skipped the second half of AA5, so we omitted data for AA5 altogether; our data set therefore comprises AA1, AA2, AA3, AA4, AA6, and AA7.)
  • Each of these comprises two parts: a preliminary online post and a more nuanced and complex essay. These are called AA1.1 and AA1.3, AA2.1 and AA2.3, AA3.1 and AA3.3, etc. The middle part of each assignment (AA1.2, AA2.2, AA3.2, etc.) was in-class discussion, graded simply as present/not present. We therefore left it out of this assessment.
  • Each of the two parts sought to practice and assess three distinct learning outcomes (the articulation of an argument, the use of appropriate evidence, and clean mechanics, such as spelling and citation style).
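
To make the shape of this data set concrete, here is a minimal sketch in Python of how the scores could be organized and counted. The identifiers and the layout are illustrative assumptions, not the actual spreadsheet we used; the sketch simply shows how the 36 scores per student and the overall total arise.

    # Minimal sketch of the data set's shape: 36 students x 6 assignments
    # (AA1-AA4, AA6, AA7) x 2 graded parts (.1 online post, .3 essay) x 3
    # learning outcomes. Naming and layout are illustrative only.
    ASSIGNMENTS = ["AA1", "AA2", "AA3", "AA4", "AA6", "AA7"]
    PARTS = [".1", ".3"]  # the in-class part (.2) was graded present/absent only
    OUTCOMES = ["argument", "evidence", "mechanics"]
    NUM_STUDENTS = 36

    scores_per_student = len(ASSIGNMENTS) * len(PARTS) * len(OUTCOMES)
    total_scores = scores_per_student * NUM_STUDENTS

    print(scores_per_student)  # 36 scores per student
    print(total_scores)        # 1296 data points in all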

To illustrate the patterns of learning, we include samples of work from four students: two who showed steady improvement over the semester (Student A and Student B), one who started strong and stayed strong (Student C), and one whose work declined in proficiency toward the end (Student D).

Chart 1 - Overall performance, average and median, in percentage

Chart 1 gives an overview of the results for each of the 36 students over the entire semester. It shows average (blue) and median (orange) student performance across all 6 assignments, 2 parts, and 3 learning goals. The dotted trend line in each shows a gradual and consistent rise in scores from the beginning to the end of the semester; both the average score and the median score rose around 10% over the course of the semester. The fact that the median is higher than the average suggests that a few quite low scores are pulling the average down (the lowest scores fall further below the average than the highest scores rise above it).
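
As a quick illustration of why a higher median than average points to a few very low scores, consider this small worked example; the numbers are invented, not drawn from our data.

    # Invented scores: two outlying low values drag the average well below
    # the median, because they fall farther below the average than the high
    # scores rise above it.
    from statistics import median

    scores = [92, 90, 88, 87, 85, 84, 83, 40, 35]  # invented, not our data
    average = sum(scores) / len(scores)

    print(average)         # 76.0 -- pulled down by the two low scores
    print(median(scores))  # 85   -- unaffected by how low they are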

We are pleased with the steady improvement throughout the semester, as it is exactly what we hoped and expected from the series of repeated assignment types. This overall data shows that frequent opportunities to engage with the assigned material and to practice a set of critical skills lead to measurable growth. The work of Student A and Student B offers good examples of this trend.

Chart 1: Overall scores, all assignments, in percentage

Charts 2-5: Progress across learning goals from the beginning to the end of the semester

In Year 2/Phase 2 (2015), we set out to measure achievement across four learning goals in writing assignments:

  1. Articulation of a clear and compelling argument;
  2. Skillful use of appropriate evidence;
  3. Competence at organizing an argument and its evidence; and
  4. Professionalism in presentation (clean of typos, errors in spelling and grammar, etc.).

In Phase 3 (2016), as the term progressed, we realized that organizational skill was difficult to assess in such short assignments and was perhaps better suited to a writing course than to this critical thinking course. We therefore reduced the point value assigned to organization and increased the weight given to mechanics, leaving three assessed goals (argument, evidence, and professionalism in presentation), partly to encourage students to attend more carefully to their presentation, where we had noticed carelessness. Hence, “3. Organization” is reflected neither in our data set nor in this report.

Chart 2: Overall growth across three learning goals

Chart 2 presents growth in all three learning goals, from the first analytical assignment to the last (in other words, it erases the ups and downs in between). Another way of looking at this chart is that it breaks down the 10% average overall gain seen in Chart 1 and shows how that gain is distributed across the intellectual skills. In all three skills, students showed growth. We are particularly gratified to see the strong response in use of evidence, since this involves close attention to details in the primary sources, and thus likely reflects the students’ increased ability to read the material with attention to details beyond simply plot and character. Often students come to class unaware of the differences among sources for Greek and Roman mythology, expecting a monolithic body of source material. One of our main goals in the class is to disabuse them of this notion and introduce them to the complexity of the ancient world. This goal is achieved by paying close attention to individual texts and noticing differences in genre and cultural context among the assigned readings.

The next three charts break down Chart 2 further, looking at the students’ trajectory in each critical skill over the course of the semester.

Chart 3 shows their growth in argumentation from assignment to assignment. The sharp decline in AA6.3 is perhaps a result of the assignment itself. In AA6, students were asked to interpret images on ancient coins as meaningful presentations of myth. Students had a difficult time understanding how subtle differences in coin iconography (is the figure on the coin clothed, nude, or in armor?) can be of great significance for interpretation. As Chart 4 shows, however, they were very good at noticing the details. Student A’s work shows a marked increase in the quality of argumentation from beginning (2.3) to end (7.3); she did well on part 6 as well. Student B’s work shows strong improvement in argumentation and evidence from 2.3 to 7.3.

Chart 3: Growth in argumentation, beginning to end

Chart 4: Growth in evidence, beginning to end

Chart 5: Growth in appearance and professionalism of work, beginning to end

Chart 5 shows that students attended more and more to the appearance and professionalism of their work as the semester advanced. The dip in AA3 and AA6 might be explained by the fact that those two assignments focused on ancient works of visual art; students had a tough time figuring out how to cite these works and what to call them in their own prose.

Charts 6-7: Progress in similarly-themed assignments

Of the seven analytical assignments, two were worksheets on how to approach reference material (AA1 and AA5; we did not complete AA5, and because of the particular nature of these worksheets, analysis of student work on them is not represented in this portfolio); two focused on ancient literature as a source for myths (AA2 and AA4); and two focused on ancient visual imagery as a source for myths (AA3 and AA6). Charts 6 and 7 show student progress within each area of focus (literature, art).

Chart 6: Growth across literature assignments

Chart 7: Growth across art assignments

Chart 6 is in line with the steady improvement we see elsewhere in the data.

Chart 7 shows a setback in interpreting ancient art. As with the other data, we see here the effects of the difficult AA6 on student performance. The literature assignments and art assignments ask students to engage in roughly the same level of critical investigation, but it is possible, even likely, that this class is the first time most students have been asked to interpret art. Their skills here are more fragile than their more practiced skills in interpreting literature, especially when dealing with coins, a genre of ancient art with which students are less familiar.

Charts 8-11 show progress from Part 1 to Part 3 of each assignment, both overall and by learning goal.

Charts 8-11 reveal that the three-part structure of each analytical assignment did not itself yield an observable improvement in performance, but that does not mean the structure failed to help students achieve proficiency on the third part. We designed the assignments in the hope that the first two parts would help students achieve at higher levels on the third part. The data does not speak to the realization (or not) of that hope; Part 3 is always more complex and challenging than Parts 1 and 2, so a flat score across the parts might yet represent growth, since students are asked to do more complex work in Part 3 than in Part 1. In any case, we are certain that the three-part structure of the assignments helps students remember the details of a text or work of visual art and engage more deeply with it.

The work of Student A and Student B illustrates the phenomenon well. In both cases, the first discussion post (Part 1) was much stronger than the resulting short paper (Part 3); they did not yet know how to deepen their thoughts across the assignment, and so their scores dropped considerably from Part 2.1 to Part 2.3. By the seventh assignment, their initial discussion posts were stronger (7.1), and the students maintained that strength into the more complex third part (7.3).

Chart 8: Overall growth from Part 1 to Part 3 on each of 5 assignments

Chart 9: Growth in argument from Part 1 to Part 3 on each of 5 assignments

Chart 10: Growth in use of evidence from Part 1 to Part 3 on each of 5 assignments

Chart 11: Growth in mechanics of writing from Part 1 to Part 3 on each of 5 assignments

Chart 12 represents not the average scores, but the numbers of scores at each level (Excellent, Proficient, Emergent, Beginning, categories taken from the rubric used for grading) in each of the six analytical assignments we tracked. Each assignment has a total of 216 scores in our spreadsheet (parts 1a, 1b, 1c, 3a, 3b, and 3c × 36 students).

Chart 12: Scores across achievement levels

This chart reveals two interesting points. First, the number of excellent scores jumped from 65 to 94 over the semester (a trend line would show a higher endpoint). Second, the number of low scores (Emergent, Beginning) shrank from 40 to 6. It would be reasonable to assume that students simply moved up one level at a time (Beginning became Emergent, Emergent became Proficient, Proficient became Excellent), but another datum that is difficult to represent in a chart nuances this picture: of the individual scores, more fell than rose between the first assignment and the last (roughly 60% fell, 40% rose). Combined with the overall increase in high scores, we can tentatively conclude that the scores that fell did so only mildly, while the increases were fewer but much larger. This might mean that weaker students made strong gains (e.g., from Emergent to Excellent, or Beginning to Proficient). If so, it is very heartening. This is a large catch-all course for the freshman class, and helping those who come to college with a weaker starting point in critical analysis is one of its aims. Charts 13-14 shed more light on this picture.
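
A small numerical illustration may help here. The level-to-point mapping and the sample transitions below are invented, not taken from our spreadsheet; they simply show how a majority of mild, one-level declines can coexist with fewer, larger gains and still produce a rising average.

    # Hypothetical illustration of the Chart 12 reasoning: most changed scores
    # fall by one rubric level, while the fewer increases jump several levels,
    # so the average level still rises. Levels and transitions are invented.
    LEVELS = {"Beginning": 1, "Emergent": 2, "Proficient": 3, "Excellent": 4}

    # (level on the first assignment, level on the last) for ten invented scores
    transitions = [
        ("Excellent", "Proficient"), ("Excellent", "Proficient"),
        ("Excellent", "Proficient"), ("Excellent", "Proficient"),
        ("Proficient", "Emergent"),  ("Proficient", "Emergent"),
        ("Beginning", "Excellent"),  ("Emergent", "Excellent"),
        ("Beginning", "Proficient"), ("Emergent", "Proficient"),
    ]

    deltas = [LEVELS[last] - LEVELS[first] for first, last in transitions]
    fell = sum(1 for d in deltas if d < 0)
    rose = sum(1 for d in deltas if d > 0)

    print(fell, rose)                 # 6 4 -- the roughly 60%/40% split
    print(sum(deltas) / len(deltas))  # 0.2 -- yet the average level rises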

The work of Student A and Student B shows the trend that students who begin with weaker skills gained much from this intervention; their gains from beginning to end are dramatic. Student D is an example of a student whose work dropped in quality toward the end of the semester, though the decline is not as steep as the gains made by Students A and B. Finally, Student C’s work demonstrates that a student who began the course with strong skills maintained that level, benefitting perhaps less from the changes we have made in the course than those with weaker skills at the outset.

Charts 13-14 compare DFW rates from 2015 (Year 2) with those of 2016 (Year 3)

CLSX 148 is a popular course for first-year students. It is also a very difficult course, whose critical demands often exceed students’ expectations. It has traditionally had a high DFW rate (the number of students who earn a D or F or withdraw from the course). In years past we have tried various ways of improving the success of at-risk students, including extra advising, tutoring, and Supplemental Instruction (SI). All of these interventions have helped (SI most of all), but 2016 boasts the most dramatic results we have yet seen for this class. Chart 13 shows the DFW rates by number of students in Fall 15 (a typical year) and Fall 16. Chart 14 shows the DFW rate as a percentage of overall enrollment. Since the F15 class was smaller and its number of DFW students higher, the percentage difference is quite striking. It should be noted that students who earn an F, and even most of those who earn Ds, simply do not complete the majority of the coursework; they are “soft drops,” students who put in minimal effort and likely should have withdrawn but did not do so before the deadline. The improvement in 2016 did not come from more students dropping before the deadline; rather, our interventions resulted in roughly half of the at-risk students staying and completing the work. Another way to look at it: the 17% drop in the DFW rate from Year 2 to Year 3 reflects roughly 45 students in F16. The new design enabled 45 students to complete the class with a grade of C or better.
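
For readers who want to see how the head counts and percentages in Charts 13-14 fit together, here is a brief sketch. The enrollment and DFW figures below are placeholders we have invented for illustration (they are not the actual counts), chosen only so that a drop of roughly 17 percentage points in the DFW rate corresponds to about 45 students in the F16 class.

    # Placeholder numbers, not the actual enrollments or DFW counts, used only
    # to show how a drop in the DFW rate translates into a head count.
    def dfw_rate(dfw_count, enrollment):
        """DFW students as a percentage of overall enrollment."""
        return 100 * dfw_count / enrollment

    f15_dfw, f15_enrolled = 76, 230   # hypothetical Fall 2015 figures
    f16_dfw, f16_enrolled = 43, 265   # hypothetical Fall 2016 figures

    drop = dfw_rate(f15_dfw, f15_enrolled) - dfw_rate(f16_dfw, f16_enrolled)
    students_helped = round(drop / 100 * f16_enrolled)

    print(round(drop, 1))   # 16.8 -- roughly a 17-point drop in the DFW rate
    print(students_helped)  # 45   -- students completing with a C or better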

Chart 13: DFW rates by raw numbers, 2015 and 2016

Chart 14: DFW rates by percentage of enrollment, 2015 and 2016



Reflections

Professors Gordon, Scioli, and Welch

We have learned over the three years of the project that shifting assessment in the course from periodic exams towards multiple scaffolded analytical assignments yields three major results:

  1. students are more consistently engaged with the course material across the span of the course;
  2. student performance is generally better (drop in DFW rates, notable progress across assignments over time), and
  3. students are engaging with the assigned material at a higher level of critical analysis, thus fulfilling a central goal for the course.

The combined scaffolding of assignments and the repetition of assignment types results in better overall grades. Doing the preparatory work for each assignment does not ensure high grades on the written assignments (Part 3), but it likely results in better scores than would have resulted without the preparatory work. Perhaps we still need to build a “writing practice” component into the preparatory work, or an opportunity for students to revise one of the written assignments in light of instructor comments and reflection upon the rubric, as a way to ensure that students are learning from their mechanical mistakes. Giving them the opportunity to engage with the same type of assignment twice is a step towards improvement, but perhaps they also need a chance to reflect on and revise existing work.

Seven analytical assignments proved burdensome to students and graders. Since assignments 1 and 5 (analyzing secondary sources) are unlike the others (which ask for analysis of primary sources), we might consider removing them from the course or presenting those skills in a different sort of assignment. Given that this is an entry-level course, the material in the first assignment (analyzing websites) is more important than that in the fifth (analyzing a scholarly article).

Students in the course struggle with analyzing works of visual art, especially coins, which are an unfamiliar medium. Perhaps we should build more analysis of iconography across a variety of media into daily lectures and daily assignments (such as Blackboard quizzes) to prepare students for this component of the course, which features in two of the six analytical assignments and in the capstone assignment (AA7).

The terminology of the art assignments has been somewhat confusing for students; calling it a “virtual museum” makes more sense in art-intensive courses than it does in other formats, and the attempt to call it a “blog” did not help students understand what was being asked of them. Future instructors of the myth class can take one of two approaches: they can retain the language of the virtual museum or art blog and provide plenty of opportunities in class to discuss what that means, or they can use the language of a traditional paper that focuses on art.

Mild declines and significant increases in scores over the course (Chart 12) might suggest that students whose skills are weak at the outset make great strides in improving their work as the course goes on, while students who start out strong either continue at the same level or decline minimally.

Students continue to struggle with the three major skills for which they are assessed on writing assignments: Argument, Evidence, and Mechanics. However, their abilities in these areas do show steady improvement over the course of the semester. We believe that the frequency of the analytical assignments and the opportunities for students to repeat tasks and work on each type of assignment twice during the semester contribute significantly to the improvements in student work.

Refining the grading rubrics has led to greater consistency in grading among instructors and has allowed us to align the headings of the rubric more closely with the goals we want students to achieve. As part of this ongoing refinement process, we realized that a heading for “organization” of ideas, and thus assessment of this skill, was less relevant for students in this type of course. We thus reduced its bearing on the grade for the assignment. We will eliminate “organization” from future rubrics used for assessment of writing assignments in this course.

Aligning the preliminary tasks more closely with the deeper analytical work that students are asked to do in Part 3 of the assignments has likely led to better written work, especially from less experienced students. Students who struggle with the mechanics of writing have at least had a chance to gather evidence and establish a thesis statement before writing the prose of the final paper.

Contact CTE with comments on this portfolio: cte@ku.edu.

 

