Creating Teachable Moments in a Research Methods Class—Paul Atchley
A psychology professor addresses the challenges of teaching a large research methods course that doesn't have a separate lab component. Through the use of a workbook that reinforces course concepts, students are able to apply research methods to novel situations, thus expanding their overall understanding.
Research Methods in Psychology (PSYC 310) is a required course for KU's psychology major and is often required for admission to graduate programs in psychology. It is a junior level course, and in our current curriculum it is taken after statistics. A methods course is typically taught as a lecture course with a lab. The size of the lecture course varies depending upon institutional resources, but lab sections are usually in the 20-30 student range.
When I arrived at KU, there was no research methods course. There were three advanced methods courses available, each with enrollments of 15-20 students maximum, but with 1200 majors, that meant the vast majority of students did not have a methods course when they graduated. The department quickly approved the creation of a methods course when I proposed it, but there was a dilemma: There were not enough resources to use the model of a typical methods course with both lecture and lab sections. Furthermore, there were not enough instructors to offer multiple lecture sections per semester. This meant that the class was going to enroll 200 or more students each semester.
My initial goal was to create a “lab in a lecture” type of course. I hoped to recreate the learning moments that occur in a lab within the class itself, so I created a workbook in which students did ten assignments and three reports to give them a more hands-on experience. The hands-on aspect was the critical goal, and the intention was to reinforce points already made through lecture. The reports initially were from a supplementary lab manual that had a CD-ROM with canned experiments for the students to run.
The initial goal of the workbook assignments was to reinforce topics that had been covered in class during lecture. Over time, I found some assignments to be more effective at supporting in-class discussion and teachable moments than others, and I have made changes to the less-effective assignments in order to provide more structure and generate more discussion.
Student performance was and still is based upon both mastery (two exams) and effort (workbook assignments), with the two weighted equally in terms of points. Students must meet a minimum threshold on each component to earn a given grade. For example, to earn an A, students need 90% of all points, but they must also earn at least 80% of the mastery points, complete eight of the ten workbook assignments, and complete all reports. I typically write very challenging exams to prevent ceiling effects in exam scores. This not only allows students to gauge accurately how much they have retained from the lectures, but also allows effort points to compensate for the increased difficulty. An analysis of the interaction between effort and mastery grades for 450 recent students indicated that the use of effort- and mastery-based points does not inflate or deflate grades to a significant degree, and that students exhibit high rates of effort.
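The A-grade rule above combines an overall point threshold with minimums on each component. A minimal sketch of that rule in Python, assuming mastery and effort carry equal point weight as described (the function and parameter names are illustrative, not taken from the course materials):

```python
def earns_a(mastery_pct, effort_pct, assignments_done, reports_done,
            total_reports=3):
    """Check the A-grade rule described in the portfolio.

    Mastery and effort carry equal point weight, so the overall
    percentage is their average. The thresholds (90% overall,
    at least 80% mastery, eight of ten workbook assignments, and
    all reports) come from the stated course policy; everything
    else here is an illustrative assumption.
    """
    overall = (mastery_pct + effort_pct) / 2
    return (overall >= 90
            and mastery_pct >= 80
            and assignments_done >= 8
            and reports_done == total_reports)
```

Under this sketch, a student with 85% mastery, 100% effort, all ten assignments, and all three reports clears every threshold, while a student with perfect exam scores but only seven completed assignments does not.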
Overall, I am happy with the construction of the course. Given the limits of the course (i.e., large class size and no labs), the integration of the workbook assignments and the use of other assignments to emphasize and create teachable moments works well. The grading scheme analysis indicates that it is not inflating grades, and the high rate of compliance for effort-based points is getting students to class and offering me an opportunity to teach them.
In future semesters, I would like to do more demonstration-based projects in class as an alternative to lecture. I would also like to integrate clickers into the course to facilitate in-class quizzes and polling for student learning. Although that would require a major overhaul of the course, I have found that students often cannot articulate clear answers to questions they are initially certain they understand. Thus, I intend to do more in-class performance assignments next semester to foster these skills and capitalize on these teachable moments.
Psychology is the science of human behavior. As a science, we have a variety of methodological tools for measuring and quantifying human behavior and cognition. Research Methods in Psychology (PSYC 310) is designed to provide working knowledge of those tools, as well as encourage the application of scientific thinking to everyday, real-world issues. Thus, as outlined in the course goals, successful completion of this course should result in students gaining the skills needed to evaluate scientific data and differentiate between science and pseudoscience.
PSYC 310 is a required course for KU’s psychology major and is required for admission to about three quarters of the U.S. graduate programs in psychology. It is a junior level course, and in our current curriculum it is taken after statistics. A methods course is typically taught as a lecture course with lab sections. The size of the lecture course varies depending upon institutional resources, but the lab sections are usually in the 20-30 student range.
When I arrived at KU, there was no research methods course. There were three advanced methods courses available, each with enrollments of 15-20 students maximum, but with 1200 majors, that meant the vast majority of students did not have a methods course when they graduated. The department quickly approved creation of a methods course when I proposed it, but there was a dilemma: There were not enough resources to use the model of a typical methods course with lecture and labs. Furthermore, there were not enough instructors to offer multiple lecture sections per semester. This meant that the class was going to enroll 200 or more students each semester.
There were several challenges inherent to the structure of this course. One was the effective teaching of research methods without a lab. Another was class size—because this course is required for all psychology majors, enrollment is around 200 students per semester. Finally, it was a challenge to establish goals for my students: Should the course be structured to prepare students for graduate school, should the focus be on helping students learn to evaluate claims in the real world, or should the course balance those two goals? To bridge them, I have tried to use research examples drawn from everyday life.
To address some of the challenges that I listed in the Background section, I have implemented several policies (see PSYC 310 Policies) to encourage student participation and learning. In addition, I have created a workbook of assignments and reports for students, in an attempt to increase their experience with the application of research methods without the benefit of a separate lab section.
My philosophy for the workbook (pdf) assignments is that student learning takes place during the discussion portion of the assignment. Higher levels of student work seem to prepare students to come to this discussion with more to say. Subjectively, student discussion during Spring 2007 seemed to be deeper and broader, meaning more students were able to take part in discussion. Objectively, one consequence of this was that we moved through lecture topics more slowly. A metric of increased discussion was the necessity to remove one chapter from the midterm section and move it to the second half of the course.
The initial goal of the assignments was to reinforce topics that had been covered in class during lecture. I thought that I could lecture, have students engage with the material, and then re-lecture in a new way using the assignment as a starting point. This technique generally worked, but in some cases where the material in the initial lecture was fairly simple, the re-lecture was not a great use of class time.
Analysis of workbook changes to achieve course goals
Three assignments have struck me as less effective than the others: “Library Research” (Assignment 4), “Developmental Designs” (Assignment 7), and “Correlational Designs” (Assignment 8). Each supported a learning objective of the course, but all three failed to generate meaningful class discussion, and therefore failed to provide teachable moments in class. As the student ratings of the assignments from Fall 2006 (below) show, the lack of discussion was particularly apparent for “Assignment 4: Library Research.”
These three assignments have been modified to better support in-class discussion. An example is “Assignment 4: Library Research,” for which you can see the original and revised versions. For this class, the ability to access databases and evaluate information sources is very important, so deleting the assignment was not an option. The original version tied in with a research report, but that created scheduling difficulties. My goal was to have students access online databases to support the research process in Report 1. What the assignment did not capture was evaluating science as presented in the popular press. It was also boring for the students and led to very low ratings of discussion on student feedback.
The assignment was modified by providing a common research topic for all students. The teaching assistants and I chose a topic that we hoped would generate interest: an analysis of facilitated communication (FC). To add to student investment, I asked students to imagine themselves as clinicians making a recommendation to a parent, which capitalized on the large number of students who typically indicate interest in becoming clinicians or counselors. I also tied the assignment back to the “Ways of Knowing” assignment by asking students to evaluate multiple sources of information and to gather information from library as well as popular sources. Students would also typically encounter a reputable university with an FC research center, requiring them to discuss whether academia always produces “objective” results. Finally, I asked them to consider issues of professionalism by including information on their professional societies’ view of FC.
As the student rating data (below) reflect, this assignment led to much more student discussion. What had been a ten-minute discussion now filled an entire class. The discussion was exceedingly rich in teachable moments and provided a view into the complexities of information evaluation that we would expect a good scientist to be able to tackle. The same model was used to redesign the other assignments.
Overall class performance—analysis of mastery and effort structure
I was interested in assessing the relative influences of mastery grades, as assessed by exams, and effort grades, as assessed by homework assignments, on overall student performance. (For a detailed description of the mastery and effort policy, see the PSYC 310 Policies.) The interaction of effort and mastery grades was analyzed for a recent sample of 450 students (one fall, one spring, and two summer sections). Only students who completed the course were included, and the results of that analysis can be seen in the following chart:
A few observations are worth noting:
- The mastery grades closely matched the final grades overall (within 3 percentage points), suggesting that the use of effort- and mastery-based points does not inflate or deflate grades significantly.
- Students seemed to be performing at a very high rate of effort. Almost two-thirds of the class reached an A level of effort. Almost 85% of the class was at an A or B level of effort.
- There were a small number of cases (9.1%) where mastery performance exceeded effort performance, so that the final grade was lower than mastery alone would have produced. In most cases, effort raised a student’s final grade (53.1%) or matched his or her exam performance (37.8%).
Assessment of assignment performance
I have provided several examples of student work on Assignment 3—the Ethics Assignment. This assignment asks students to attempt to redesign the Milgram experiment in an ethical way. In the Milgram experiment, Stanley Milgram tested the degree to which a person would obey an authority; in this case the authority figure was the experimenter. In the experiment, a participant (the teacher) was required to deliver electrical shocks to another person (the learner) for failing to remember words, to a strong enough voltage that the learner appeared incapacitated. Though the participant did not know it, the shocks were not real and the learner worked for Milgram. Milgram found about 2/3 of his teachers were willing to comply with the request of the authority to continue to shock the learner, even after the learner seemed debilitated.
The goal is to get across the idea of cost/benefit analysis of science: We learned a lot about human behavior from the Milgram experiment, but we did so at a cost. To achieve this goal, students must analyze what was unethical with the experiment, what the experiment taught us, and try to balance cost and benefit in a new experiment. There is no way to actually “correctly” complete this assignment. I allude to this when I tell students about this assignment, explaining that they are graded on a pass/fail basis, not for a correct answer but for the thoughtfulness of their work. Students find this incredibly challenging and a bit frustrating, but the ratings of the assignment are generally very high. The discussion portion of this assignment takes approximately 30-40 minutes and makes the concept of cost/benefit in psychological research very clear.
Strong examples: The strong examples all share some common features (see Strong example 1 (pdf), Strong example 2 (pdf), Strong example 3 (pdf), and Strong example 4 (pdf)). First, they address each of the parts of the assignment. As we will see in the weak examples, some students fail to complete the full assignment. Second, they provide a level of detail that clearly demonstrates the student understands the material. The weak examples are often brief because students either do not know the answer or they are trying to disguise that fact with brevity. Third, they are simply well-written.
Weak example 1 (pdf): The responses in this example are obviously quite brief. There is not enough detail in the answers to items 1 and 2. Also, a new design idea is referenced but it is not fully described, and the remaining items for this assignment were not completed.
Weak example 2 (pdf): Again, a greater level of detail overall would be required to achieve the goal of this assignment. There is no outline of an alternate design and therefore no analysis of the ethical implications and effectiveness of the new design.
Weak example 3 (pdf): It is clear that the questions are not fully answered in this example and that more information needs to be included to demonstrate thoughtful consideration of the assignment, as well as to provide explanations that would clarify the responses that are present.
Weak example 4 (pdf): This example has several problems. First, it is incomplete since it does not answer all the items required for the assignment. Second, the “new” experiment suggested was either not different from the Milgram experiment or was simply not explained in a way that showed how it was different and why it was more ethically acceptable.
I am happy with the overall construction of the course. Given the limits of the course (large class size and no labs), the integration of the workbook assignments and the use of the assignments to emphasize and teach important points works well. The grading scheme is perceived as fair, the analysis indicates it is not inflating grades, and the high rate of compliance for effort-based points is getting students to class and offering me an opportunity to teach them.
Course changes since developing this portfolio
The biggest course changes center on the mastery points. One major issue students raised was the concentration of mastery points in two exams. I also felt that I was not doing enough to encourage the use of out-of-class time toward mastery. To address both of these issues, I have begun using the Blackboard quiz environment to present quizzes on each chapter before it is covered in lecture. These quizzes account for 20% of the mastery grade and can be taken to criterion (multiple attempts). The contribution of the midterm and final exam toward mastery has been reduced accordingly. This change went into effect in Spring 2008, and I will monitor how it influences final grades.
I am not always happy with the timing of giving assignments. Course lectures ebb and flow with student questions and with feedback in class that indicates the need to cover topics with more depth or to capitalize on student interest. This leads to awkward assignment timing. For example, I like to have the reports due after a weekend, so that students have time to complete the many steps required. However, we are not always ready to assign something at the end of the week given where we are in lecture, and assigning it the following week would lead to the assignment occurring after the lectures on that topic have finished.
Thus, it is not always the case that these assignments have led to teachable moments, but generally they have succeeded to some extent. The following assignments have generally been very successful: “ways of knowing,” “my life plan,” “ethics,” “identifying confounds,” and “factorial designs.” Success is defined by the amount of discussion the assignment generates, the subjective quality of student work, and the ability of the assignment to lead to “teachable moments” in class.
Assignments that were less successful included “library research,” “developmental designs,” and “correlational designs.” These three assignments shared a common flaw: they were too unstructured. As discussed in the “Implementation” section, I have addressed this by adding structure: the whole class now evaluates common examples rather than each student generating his or her own. To improve the assignments’ ability to generate discussion, the examples are designed to approach moderately controversial topics. On the whole, these changes have worked very well.
I would like to do more demonstration-based projects in class. A number of concepts can be demonstrated with web-based examples developed by other instructors. I have started to search for more examples to add this semester, though the list must be updated each time the class is taught, as content becomes unavailable. For example, in the section on correlations I have found a very nice “restriction of range” online demo by David Lane that teaches the concept far more clearly than lecture alone.
I would like to integrate clickers into the course to facilitate in-class quizzes and polling for student learning. This is a big step requiring a major overhaul of the course, one which I am reluctant to take on at this time. Instead, I have approached it via the softer option of doing more in-class assignments in which student performance is collected and assigned effort points. This requires students to generate answers rather than nod their heads that they understand. I have discovered (not to my surprise) that students often cannot generate answers about concepts they are certain they understand. Clickers could accomplish this more immediately, but they are limited to multiple-choice responses. I intend to do more in-class performance assignments next semester. Having done a few this semester, however, I can already see that they take much more class time and that I may need to reduce the overall in-class content and move more material to out-of-class study.
Contact CTE with comments on this portfolio: firstname.lastname@example.org