Closing the Loop: Identifying Program Goals, Assessing Learning Outcomes, and Re-Examining Practices—KU Intercampus Program in Communicative Disorders (2008)
This portfolio describes the KU Intercampus Program in Communicative Disorders’ process of identifying and assessing student learning goals in their master’s program. The project involved the development of program goal rubrics, a plan for formative assessment and remediation of student skills, the documentation of student learning through electronic portfolios, and the implementation of a new summative assessment.
Background
The University of Kansas Intercampus Program in Communicative Disorders (IPCD) is a unique program providing a graduate education to students in speech-language pathology and audiology. Changes to certification requirements in 2005 prompted the faculty to identify learning outcomes and to conduct assessments measuring how well students achieved those outcomes. This portfolio focuses specifically on the master of arts in speech-language pathology program.
Implementation
The faculty fully embraced documenting learning outcomes. During the 2003–04 academic year, the department identified five global program goals and created a standard syllabus format, allowing for better alignment of course and program goals. A sub-group of faculty also developed two rubrics, one for diagnostic skills and one for treatment skills. Prior to 2003–04, student assessment was primarily summative; however, in revising the program, we felt interim formative assessments were desirable. Therefore, the faculty developed a series of eight questions, and in 2004–05 the first group of students completed this formative assessment. After collecting data over three years, we created three further supporting documents for the 2007–08 year: a formative exam reviewer summary, an action plan form, and a list of suggested consequences for specific weaknesses.
Student Work
Beginning in 2007–08, we initiated a pilot portfolio project in which students archived a portion of their course and clinical work each semester. Students also completed two self-assessments during the project: one used formatively to help guide their second year of studies, and a second to help identify areas for continued learning as they transitioned into their careers. A second pilot group began in 2008–09, during which we addressed questions raised during the first pilot; for example, we introduced an Artifact Description Sheet to prompt self-reflection.
Starting in 2009–10, we expanded the portfolio project to include all new master’s graduate students. Further changes were made, such as taking the portfolios online, and we also implemented a final, formal summative assessment. The faculty sub-group designed a rubric, advisor checklist, and formal instructions for students to support this assessment. As a whole, the faculty felt the new exam structure would provide a better indication of each student’s abilities. In order to better assess the whole process, we embedded electronic surveys, for both faculty and students, at the mid- and end-program points.
Reflections
Following completion of the portfolio project by the first full group of students, the faculty have seen generally positive results. At the formative stage, students were beginning to self-advocate more, and at the summative stage the new assessment method more efficiently provided information regarding student knowledge. Although there are still some points to address, the main question now is: where do we go from here? Using data from the summative assessments to reexamine our admissions process, as well as the departmental curriculum, will allow us to provide our students with a stronger program. Letting the end result speak to the larger picture gives us the opportunity to come closer to truly “closing the loop.”
The University of Kansas Intercampus Program in Communicative Disorders (IPCD) is a unique program that provides a graduate education to students in speech-language pathology and audiology. The KU IPCD combines faculty, research, and clinical facilities of two departments: Speech-Language-Hearing: Sciences and Disorders, located on the Lawrence campus, and Hearing and Speech, located at the KU Medical Center in Kansas City. While the Department of Speech-Language-Hearing in Lawrence is responsible for undergraduate education, all graduate degrees are conferred through the joint Intercampus Program and include the master of arts in speech-language pathology, the doctor of philosophy in speech-language pathology and in audiology, and the doctor of audiology (Au.D.). This portfolio focuses on the master’s in speech-language pathology program.
Changes in certification requirements for speech-language pathology that took effect in 2005 led to changes in the master’s speech-language pathology program. Specifically, our national organization previously mandated that graduate students complete course work in certain topic areas, as well as clinical experience with specific communication disorders. However, the national organization grew dissatisfied with this approach, feeling that merely providing experiences did not guarantee learning; it therefore moved to a system of accountability in which individual programs had to identify learning outcomes and assess how well their students were achieving those outcomes. In this portfolio, we share our experiences in attempting to identify and measure student learning outcomes in the hope that other departments can learn from them.
Establishing learning outcomes
While the impetus for documenting program learning outcomes came from a mandate from our national organization, the faculty fully embraced it, motivated to identify and measure learning goals as a more effective means of evaluating program success and identifying areas for revision. During the 2003–04 academic year, a small faculty focus group identified five global program goals that they then took before the full faculty for discussion and revision. The final five program goals served as the basis for the KU ASHA Knowledge Standards Grid. A review of course and clinical experience offerings revealed a variety of ways for students to meet program goals in each content area. In addition, the faculty agreed on a standard syllabus format so that more detailed course goals could be cross-referenced with the program goals, making the alignment of course and program goals more transparent.
Developing program rubrics
While these initial program goals were helpful in defining the program’s mission and providing alignment across content areas, the faculty viewed identifying levels of performance as particularly critical, because this allows tracking of student growth across the two-year master’s program. A small group consisting of the three authors, representing both campuses, both training environments, and a diversity of content areas, was formed to draft a program rubric, containing more detailed program goals as well as levels of performance from novice to advanced. Their work resulted in the creation of two rubrics, one for diagnostic skills and one for treatment skills, which were subsequently presented to the full faculty. For each skill, four levels of performance were determined by thinking about the types of performance we were likely to see (or would want to see) as students move through the program.
Measuring learning outcomes
Prior to 2003–04, student assessment was primarily summative in nature, with students completing projects and exams as part of individual course requirements, participating in final oral examinations as required by the university, and, at the end of their program, taking a national examination. Our national organization mandated that graduate programs require formative assessments in addition to the more familiar summative assessments. Moreover, our faculty recognized that weaknesses identified at the end of a student’s program in the final oral examination could not be remedied before the student graduated. For this reason, interim formative assessments were viewed as desirable, so that weaknesses could be identified earlier in a student’s program and addressed before graduation. Thus, our faculty on both campuses met on several occasions to develop formative questions and grading standards for a written formative examination to be administered at the program midpoint (i.e., end of Year 1).
Developing formative assessments
After some trial and error, the faculty developed a series of eight general questions, written so that students were expected to apply academic information in clinical and professional settings. Students selected four of the eight questions to answer, completing the formative assessment at the beginning of the third semester of their graduate program. Each student’s responses were then graded by two faculty members using a continuum rubric (i.e., does not meet the standard, meets the standard, exceeds the standard), and individual academic advisors subsequently met with students to review their performance. Students in the 2004–05 academic year were the first group to complete this formative assessment. Although the faculty spent considerable effort developing the exam, we neglected to consider ahead of time how we might evaluate collective student performance. The faculty decided that a more formal means of data collection was needed so that we could examine students’ collective performance, identify weaknesses that occurred across students, and consider ways of revising the program to bolster performance in those weak areas. When the second and third formative examinations were administered in 2005–06 and 2006–07, we collected quantitative data (i.e., the percentage of students meeting or exceeding the standard) and qualitative data (i.e., example papers from two high-, two average-, and two low-performing students) for discussion.
After comparing the data from these two academic years, the faculty created three additional documents for the 2007–08 academic year. The first was a formative exam reviewer summary, where each exam reviewer summarized the student’s strengths and weaknesses and suggested potential consequences. The second was an action plan form generated by the advisor with the student, summarizing strengths and weaknesses and creating a clearly outlined and binding plan to address weaknesses. The final document was a list of suggested consequences for specific weaknesses that advisors could draw from in creating action plans, which assisted in maintaining uniformity across advisors while still allowing for individualized plans.
After evaluating student performance over three years, the faculty felt that the formative exam was effectively identifying weaknesses in student learning and providing a means of addressing them prior to graduation. However, there was also consensus that the formative assessment rubric’s feedback might be too general and that the current evaluation tool was not tapping the full range of entry-level skills. The faculty decided that specifying levels of performance for the program goals would aid in tracking student progress, and that a formative assessment more integrated into the program would be desirable. As noted earlier, we developed the Diagnostic Skills Rubric and the Treatment Skills Rubric to identify levels of performance for a more specific set of program goals; these program rubrics can be adapted to individual courses or clinics. Beginning in the 2007–08 school year, the faculty have utilized these rubrics both in their original format and with course- or setting-specific modifications.
Pilot portfolio project, group 1
For the 2007–08 academic year, we initiated a pilot portfolio project as a means of making the formative assessment a more integral component of the graduate program. We recruited nine students, or approximately 30% of the 2007–08 entering graduate students, for this project. These students archived a portion of their course and clinical work each semester of their graduate program. They completed self-assessments twice during the project: at the end of the first two semesters and again at the end of the second year. The students and their advisors used the first self-assessment to create an action plan to improve weaknesses and, possibly, to help guide the second-year, off-site experience. The goal of the final self-assessment and portfolio review was to help each student identify areas for continued learning as she or he transitioned to a career. We used this pilot project to determine whether portfolios are a richer source of formative assessment and should be required of all students. We also examined how best to aggregate information across individual portfolios as a means of evaluating the program’s success as a whole and identifying areas for future revision.
In addition, the pilot project served to help establish guidelines for future portfolios. Reflections during the project’s early stages revealed changes necessary in order to improve both the consistency and the quality of the students’ archived materials. These changes included: increasing the number of artifacts included; allowing the inclusion of “outside” artifacts; addressing the selection of appropriate artifacts; and shifting the portfolios into an electronic format.
Pilot group 2
In order to address some of the changes suggested by the first pilot group, as well as to gain broader faculty feedback, we conducted a second portfolio pilot group during the 2008–09 academic year. We particularly addressed questions related to the portfolio artifacts. We increased the number of required artifacts to 11; these artifacts had to cover both clinical and course work, both evaluation and treatment, and a range of ASHA-identified areas. We also prompted reflection by introducing an Artifact Description Sheet to be completed for each item. Additionally, we gave students the option of substituting one outside artifact (e.g., a research experience artifact). Instructions for the second student portfolio pilot group were further refined prior to broad implementation.
For the second pilot group we also expanded the number of students by increasing the number of participating faculty advisors. This allowed the original faculty participants to be sure that the written guidelines were straightforward and that different faculty would interpret them in the same way.
From pilot group to whole group
Beginning in the 2009–10 academic year, we expanded the portfolio project to include all incoming master’s graduate students. At the start of their first year, students participated in an orientation during which we introduced the portfolio project. While we kept the pilots’ general structure, with students archiving work each semester, creating an action plan at the end of the first year, and completing self-assessments at two stages, we again made several changes. One such change involved taking the artifacts online. Rather than saving hard copies of each artifact, students must upload the materials to a personal KU Keep Toolkit portfolio. Students are required to share their portfolios with their advisor at both the mid and final points (i.e., at the end of the first and second years). They must also create a CD archive corresponding to the KU Keep portfolio, which they submit at the final exam.
We also implemented a final, formal summative assessment conducted by three-person faculty exam committees, rather than relying on the portfolio alone. We designed a rubric to employ in this assessment, as well as a final checklist for the student’s advisor. Although students did not have access to the rubric beforehand, they did have an instruction sheet indicating both the steps leading up to the final exam and the expectations during the exam. For the final assessment, each student chooses three artifacts to present to the committee and creates a PowerPoint presentation covering each item. During the exam, students give their prepared presentation and respond orally to faculty questions, such as why they chose the items and how they feel the items represent their work. We feel that, in comparison with the oral exam structure used prior to 2003–04, in which questions were not situated within a larger context and could cover any area, the revised final assessment provides the faculty with a better indication of each student’s true abilities.
In order to assess the portfolio process as a whole, we embedded electronic surveys for both faculty and students to complete after each step of the process (i.e., at the mid and end points). Although response rates to the formative survey originally seemed low, after further prompting students did go back and complete it. The summative survey is also accessible through the department’s website, and students are prompted to complete it following their final exam. In the future we may implement an additional survey, completed six to nine months after graduation, which would tell us how the experience, particularly the required reflection, has helped graduates in their professional careers.
In 2003 our department embarked on a process of documenting student learning that included the entire faculty involved in the speech-language pathology master’s program. Feedback and reflection on goals, assessment, and student performance throughout the process led to revision at a variety of stages. While our reflections here will focus on the wider implementation of the student portfolio project, see our discussion of "Closing the Loop" for a fuller reflection on earlier stages of our project.
Following completion of the portfolio project by the first full group of students, we have identified several minor changes to make. In the future, we may revise our instructions for choosing artifacts. Common themes in the summative student survey were that the artifact descriptions became redundant and that “it would be a great place to add a section [where] students could cite evidence that supported the artifact.” While we feel the former response might speak more to the individual student’s reflectiveness, we may wish to revisit the artifact description sheet to better reflect the importance of evidence. Similarly, several students noted that they felt the summative assessment time limit was too short; in the future we will provide students with an extra minute to present each artifact.
Following the first summative exams of the portfolio project, the faculty as a whole met and reflected on the project. One question raised was: what happens if a student fails his or her exam? While failure in only one area might be addressed with a “makeup” assignment, failure in more than one area would necessarily carry different consequences. The answer still needs to be determined. However, to help avoid failures, we may wish to implement a “practice” artifact presentation as part of the formative assessment. At that point students could receive both feedback on their presentations and greater guidance on what constitutes a “good” or “bad” artifact. We also discussed allowing students to observe others’ summative exams. Opening the exams would not seem to provide an advantage to any single group of students and could possibly help curb anxiety; upon observing an exam, students might better understand the expectations and, therefore, enter their own exams more confidently.
While we did not gather much student feedback from the pilot group, faculty response was positive, with the faculty seeing the process as allowing for a different kind of conversation with students, particularly during advising meetings. In moving from the pilot to the whole group, faculty noted through the formative assessment a shift in student thinking, particularly when students created their action plans: students began to advocate more for the kinds of experiences they wanted. Faculty response following the summative assessment was also generally positive. Overall, in comparison with the previous oral exam structure, faculty felt the new method provided more information regarding each student’s knowledge in a more efficient manner.
Student response on the summative survey was mixed. While some did not like the process, others found it useful and a good way to encourage self-reflection. Speaking specifically to the summative exam format, one student noted, “I found the experience to be very rewarding and [it] gave me additional confidence going into the interview process. I think it is beneficial for us as students to gain the experience in public speaking and thinking on our feet.” While it is gratifying to see the positive comments, especially when students connect their studies to their future careers, we were equally glad to receive the more critical comments, as they help us gauge not only where we are but also how we might move forward.
The question now is: where do we go from here? First, we agree that the department needs at least one more full cycle of the portfolio project as it stands before large changes are considered. However, we will implement the more minor changes (e.g., added presentation time) previously mentioned. One possible next step would be to examine the link between summative exam performance and admissions data to determine whether any measures at admission predict final program outcomes. This could be useful in guiding our student selection at admissions because it might suggest weighting certain admission data more heavily (e.g., GPA in the major may be a better predictor of program outcome than GRE scores). Another possible step is to use summative exam data to reflect back on the departmental curriculum and on class and clinic expectations. Questions to address at that point would include: Do students have a chance to give in-class presentations in which they must justify why they are doing something a certain way? Are students receiving useful feedback on assessments or just a letter grade, and how can addressing this positively influence the summative exam experience?
By using the data gained through the portfolio project to reflect on the state of the program as a whole, we are better able to identify needed revisions that will lead both to a better learning experience for the students and to a stronger program. Moving through the process and letting the end result speak to the larger picture allows us to come ever closer to truly “closing the loop.”