Using Peer Reviews to Improve Writing in Research Methods—Nancy Brady (2013)



 

To improve upper-level undergraduate student writing and performance in a research methods course, a professor restructured a final paper, adding a peer review process into the assignment.

 


Portfolio Overview

Research Methods in Speech-Language-Hearing (SPLH 660) is a required upper-division undergraduate course. Most students will go into practice as speech pathologists; therefore, the course's emphasis is on the research methods needed to be a good practitioner. I originally structured the course so that the largest portion of a student's grade consisted of a final paper and two exams; however, this structure proved problematic, and I was disappointed in many of the final papers. Therefore, I decided to restructure the final paper to address these concerns and to scaffold the writing process.

While I retained the basic elements of the final paper (the literature review and the methods section), I broke the assignment into three parts:

  • a first draft of the literature review ending in a clear hypothesis or research question;
  • a revised literature review plus the first draft of the proposed methods section; and
  • a full, revised final paper.

I also implemented peer reviews as part of the paper's overall scaffolding, with the reviews occurring twice: after parts one and two. Draft grades were based equally on students' peer reviews and on their own writing efforts. Finally, I employed more detailed grading rubrics for both the peer reviews and my own grading of the final papers.

The overall final paper grade distribution did shift slightly from earlier semesters, with more students earning A's and fewer students earning low grades. Students whose papers earned A's generally were those who took the time to meet with me and engaged with their peer reviews. Students who submitted B-level work tended to engage less with their reviews and were less likely to address all of their reviewers' points. Students with lower-level work showed little to no engagement with reviewers, provided less feedback as reviewers themselves, and left many points unaddressed.

Overall, I was happy with the improvements in paper quality I saw during the Fall 2012 SPLH 660 iteration. Performance on the revised final paper project increased, mirroring a similar shift in overall grades, with more students falling in the A and B range. The students generally seemed both to appreciate receiving peer feedback and to like the scaffolding, with different "checks" making the overall assignment feel more doable.

However, students also stated that they would prefer more feedback from me; in the current structure I provided feedback only before the final draft if they specifically asked. Therefore, I am considering adding mandatory “mini-conferences,” which would give every student a chance to meet with me and ask questions/receive feedback. I also plan to revisit the number of papers each student must review, possibly cutting back to each student reviewing four papers.

 

Research Methods in Speech-Language-Hearing (SPLH 660) (pdf) is a required upper-division undergraduate course, designed for students in their junior or senior year. A small number of new graduate students also generally enroll.

Most students in the course will go into practice as speech pathologists. Therefore, the real emphasis of SPLH 660 is on the research methods needed to be a good practitioner. With that in mind, the course goals include learning how to:

  • evaluate research;
  • read, summarize, and describe research in a particular area; and
  • gather data in order to determine if therapeutic practices are effective.

I originally structured the course so that the largest portion of a student's grade consisted of a final paper and two exams. For the final paper, the students had to create a literature review and methods section for a hypothetical study. However, I found that they had difficulty making the logical connections among creating a hypothesis to address a problem, summarizing literature that leads to that hypothesis, developing an appropriate research plan, and then proposing a suitable evaluation plan. The first semester that I assigned this paper, I was disappointed in many of the final papers. Some students had trouble writing a literature review that synthesized and critiqued research; a common problem was presenting a string of abstracts rather than an integrated literature review. Therefore, I decided to restructure the final paper to address these concerns and to scaffold the writing process.

While I retained the basic elements of the final paper (the literature review and the methods section), I broke the assignment into three parts. First, there was a first draft of the literature review ending in a clear hypothesis or research question. The second part consisted of a revised literature review plus the first draft of the proposed methods section. Finally, students submitted a full, revised final paper.

I also implemented peer reviews as part of the paper's overall scaffolding, with the reviews occurring twice: after parts one (pdf) and two (pdf). These reviews provided students with both additional feedback and the opportunity to learn from reviewing others' work. To facilitate the process I used SWoRD, software specifically developed to aid peer review. The SWoRD software anonymously assigns papers to peer reviewers and provides the scaffolding for peer feedback on each paper. For example, peers indicated whether the author described "gaps" in previous research as part of the literature review. Once they received their peer evaluations, students also had to review the reviewers by providing "back reviews." Draft grades were based equally on students' peer reviews and on their own writing efforts.

In addition, I employed detailed grading rubrics (pdf) for both the peer reviews and my own grading of the final papers. Although I had used a rubric in the past, I found that it was not sensitive enough. Therefore, I added some elements with the hope of identifying specific areas that students were learning versus those that continued to be challenging. I shared the rubric with students ahead of time so that they knew exactly what I would be evaluating for their grade.

The overall final paper grade distribution did shift slightly from earlier semesters, with more students earning A’s and fewer students earning low grades.

Students whose papers earned A's generally were those who took the time to meet with me to talk about their papers, go over additional drafts, and so on. They also tended to engage with their peer reviews. For example, a reviewer of Student 1's first literature review step (pdf) commented, "Because there are no gaps in the data noted, there are no weaknesses shown either." Student 1 responded with, "I think this is a valid point by adding a sentence here and there elaborating on the instruments I could add weaknesses and have a stronger review." For more information, please see Student 1's final paper (pdf).

While B-paper students also engaged somewhat with their reviewers, they also received reviews pointing to larger content and technical (e.g., citation) problems. A reviewer of Student 2's first step (pdf) felt the "overall…message of the review [was] not clear," while another noted problems with the in-text citations. Although Student 2 addressed some of these issues before submitting the final draft (pdf), the paper still contained some weaknesses, such as a disconnect between the proposed study's methods and design.

Students with lower-scoring papers typically showed little to no engagement with their reviewers. As reviewers themselves, these students also tended to give less feedback. Student 3 (pdf) received reviews that generally called for more detail and depth, which the student did not entirely address before turning in the final draft (pdf). Students with final papers earning C and lower grades also tended to have more technical problems, such as incorrect citation styles.

Overall, I was happy with the improvements in paper quality I saw during the Fall 2012 SPLH 660 iteration. As noted in the student performance section of this portfolio, overall performance on the revised final paper project increased. This mirrored a similar shift in overall grades, with more students falling in the A and B range; I have also included a comparison of overall grades and final paper performance for both the Spring and Fall 2012 iterations.

The more detailed rubric seemed to help students better grasp the paper expectations. It also made the grading process much smoother for me.

The students generally seemed to appreciate receiving peer feedback, with fewer students vocally disliking the process. They also liked the scaffolding, with different "checks" making the overall assignment feel more doable. However, they also stated that they would prefer more feedback from me; in the current structure, I provided feedback before the final draft only if they specifically asked. Given this preference, combined with the high performance of those students who did meet with me, I am considering adding mandatory "mini-conferences," which would give every student a chance to meet with me, albeit briefly, to ask questions and receive feedback. The drawbacks would be the possible loss of class time and deciding how to occupy the rest of the class while I meet with each student individually.

I also plan to revisit the number of papers each student must review. In the current iteration, students review five different papers; they felt that this was too many. So, in future iterations, I may cut back to each student reviewing four papers.
