Understanding Before Acting: Teaching Students to Assess Needs



A professor implements changes in a graduate-level class in the higher education administration program so that students develop beginning competencies in program evaluation and assessment.

 

- Susan Twombly (2012)

ELPS 885, Program Evaluation and Assessment in Higher Education, is a required course taken primarily by second-year students (and a few doctoral students) in the two-year professional master’s degree program in higher education administration. Students in this program seek entry-level jobs in student service areas in colleges and universities. It is very important that students develop beginning competencies in program evaluation and assessment, and yet this course is typically one of their least favorites. Students choose the higher education administration major because they want to work with students, not because they want to develop research skills. For these reasons, the course poses particular challenges.

In past years, there was a unit on needs assessment, but no specific written assignment that required individual students to engage with the material or demonstrate to me that they had learned the basic concepts (or that enabled me to identify the areas in which they were struggling). For this iteration, I implemented a written component that showcased the new unit goal. By exploring a timely, real issue based on real institutional data, students not only learned how to use such data as a basis for targeting interventions but also saw that needs assessment is a real task that institutions carry out, not just a theoretical one.

In order to gauge student learning of the needs assessment unit outcomes, several activities were used during the course itself, including a graded written assignment and a group project on needs assessment. The final assessment piece was how students translated mastery of needs assessment concepts from the course to another setting – the comprehensive exam.

There were some aspects of the revised course that were less successful than desired. The course remains one of the students’ least favorites. However, the work on the graded assignment was quite good and met my expectations. On the comprehensive exam, I saw that students’ ability to transfer learning from the course to a new situation and a new prompt was limited. As a result, I adopted a new book and restructured the larger class project to follow more closely the process a real-life university task force would follow.

ELPS 885, Program Evaluation and Assessment in Higher Education, is a required course taken primarily by second-year students (and a few doctoral students) in the two-year professional master’s degree program in higher education administration. Students in this program seek entry-level jobs in student service areas in colleges and universities. All master’s degree students in the program hold assistantships in an office on a campus (residence hall assistant complex directors, academic advisors in schools or the university advising center, admissions, multicultural affairs, etc.) or full-time jobs in a college or university. Students come to this program with a wide variety of disciplinary backgrounds. Although a few students have had full-time work experience prior to enrolling, most come directly from an undergraduate program.

The goals of ELPS 885 are for students to come away from the course with:

  1. an understanding of what program evaluation and assessment are, the differences between them, how they are used in higher education, and why they are important;
  2. knowledge of the main kinds of evaluation questions to ask about programs, what purposes these questions serve, and when in a program’s lifecycle it is best to ask them; and
  3. beginning knowledge of what data to collect to answer each type of question, how to collect and analyze data, and how to communicate the results.

These broad goals are accompanied by more specific learning outcomes for each major unit of the course.

At some point, graduates of this program will have to evaluate their programs and assess student learning outcomes. Colleges and universities are full of programs that were implemented by well-meaning individuals. These programs are rarely evaluated to determine whether they have been implemented faithfully or whether they are effective. In these times of shrinking resources, evaluation becomes important in order to ensure that resources are not being invested in ineffective programs. At the same time, evaluation activities are essential to improving the programs that do exist. Units outside strictly academic programs (the ones our students work in) are increasingly being called on to demonstrate what students learn as a result of participation. Therefore, it is important that future student affairs professionals learn about student outcomes assessment and how to develop and carry out these assessments.

I wanted to focus on this course because it is very important that students develop beginning competencies in program evaluation and assessment, and yet the course is typically one of their least favorites. Students choose the higher education administration major because they want to work with students, not because they want to develop research skills. They have limited backgrounds in statistics, qualitative research, and research methods in general. For these reasons, the course poses particular challenges. The section on needs assessment has proved especially difficult for students: rather than merely providing solutions, it forces them to ask questions. Although I made significant changes to the entire course, I chose to focus my portfolio on the needs assessment unit because it has posed such challenges to students in the past. They are accustomed to implementing programs without first asking thoughtful questions about the nature and scope of the problem a program is meant to solve (or whether a problem even exists). The focus of this portfolio is the changes made to the learning outcomes, assignments, and assessment of student learning for this particular unit of the course.

An important goal for me is that students see the real-world application of the material in this course and understand that it is not just a dry subject foisted on them for its own sake. By working with a timely, real issue and existing institutional data, students come to understand the relevance of assessment activities and actually carry them out, learning how to use such data as a basis for targeting interventions and seeing assessment as a practical task that institutions perform, not just a theoretical one.

In past years, there was a unit on needs assessment, but no specific written assignment that required individual students to engage with the material or demonstrate to me that they had learned the basic concepts (or that enabled me to identify the areas in which they were struggling). For this iteration, I implemented a written component (pdf) that showcased the new unit goal.

Prior to completing this written assignment, students were asked to do the following:

  1. Read a chapter about needs assessment available to them on Blackboard.
  2. Come to class prepared to discuss the nature of the retention/graduation problem for men and women (that is, which sex is likely to be retained at a higher rate), using data from www2.ku.edu/~oirp/attendance/index.shtml.
  3. In class, students worked together in small groups to respond to the prompts in item #2 (a small piece of the overall written assignment described above).
  4. In-class prompts asked students to identify, based on their analysis of data, populations they would target for interventions.

Both the written assignment and the in-class group assignment relied on a specific chapter on needs assessment and on real institutional data available from the Office of Institutional Research and Planning’s (OIRP) website. The data on the OIRP website is of the type that students would use at KU or at another institution. The problem of retention and graduation rates is also a common institutional issue.
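To make the underlying comparison concrete, here is a minimal sketch in Python of the kind of retention and graduation rate calculation by sex that the OIRP tables support. The cohort counts below are entirely hypothetical placeholders, not actual KU/OIRP figures; in the course, students read such rates from the published tables rather than computing them, but the comparison logic is the same.

```python
# Minimal sketch: comparing retention and graduation rates by sex.
# All counts below are hypothetical and for illustration only;
# they are NOT actual KU/OIRP data.
cohort = {
    "men":   {"entered": 2100, "retained_year2": 1680, "graduated_6yr": 1197},
    "women": {"entered": 2300, "retained_year2": 1955, "graduated_6yr": 1472},
}

for group, counts in cohort.items():
    retention = counts["retained_year2"] / counts["entered"]   # year-2 retention rate
    graduation = counts["graduated_6yr"] / counts["entered"]   # six-year graduation rate
    print(f"{group}: year-2 retention {retention:.1%}, six-year graduation {graduation:.1%}")
```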

This assignment is related to a couple of other courses in the program. For example, students could draw on PRE 715 Understanding Research in Education to know how to read the data tables. Students could also draw on courses they have taken as college students, inside or outside of the program, to understand what additional kinds of information they might need about students in order to fully understand the problem (e.g., family income, parents’ education level). Additionally, from other PRE courses, students would bring knowledge of what kinds of interventions work to improve retention and graduation rates for particular populations.

In order to gauge student learning on the outcomes identified for the needs assessment unit, several activities were used during the course. A final assessment piece was how students translated mastery of needs assessment concepts from the course to another setting: the comprehensive exam.

Students were graded on the written assignment described above using a rubric (pdf). I gave them feedback on their work and a chance to revise the assignment based on that feedback for inclusion in their final portfolio.

In addition, students were given an in-class sample of the comprehensive exam question (pdf) and were asked to work in groups to answer the prompt about needs assessment. Although not formally graded, this exercise gave me a general idea of whether they could transfer learning from a dedicated assignment to a more complex problem, similar to the one they would face on the comprehensive exam.

The third form of assessment came from the comprehensive exam and practice exams that students completed in Spring 2013.

In general, student performance on the written assignment was better than expected (see work samples below). Out of 32 students, 8 earned a B/B+, 8 earned an A-, and 15 earned an A. There is, however, room for improvement. Several of the B papers completely missed the purpose of the assignment, going straight to recommendations before learning about the problem. Other characteristics that distinguished the B papers from the A papers were organization, grammar, and syntax. I was looking for improvement in the final portfolio, in which students were able to revise earlier papers. Among the A papers, the work was quite good: the components of the assignment were addressed correctly and thoroughly, and communicated professionally. In past years, the only methods of assessment used were a mid-term “exam” case study and the comprehensive exam, and performance on the needs assessment task was quite variable and lower than what we would like.

There were some aspects of the revised course that were less successful than desired. However, the work on the graded assignment met my expectations. In earlier years, I would give an in-class case and ask students to design a needs assessment, or at least tell me what questions they would ask, and invariably they would struggle with even knowing what I meant. In the future, I would change the assignment in one of several ways: eliminate the prompt asking students to define what the problem is; spend far more time talking with students about the socially constructed nature of problems (e.g., attrition could be defined as a good thing, or as a problem if it is economically disadvantaged students who are leaving); rephrase the prompt; or simply assume that the problem has been defined for them, as in this case, where attrition is a concern because it wastes resources for both the institution and the student, contributes to lower rankings for the university, and so on. Few students, even those who wrote the best papers, addressed this aspect of the assignment and defined what the problem was.

“Proof” of improvement was tested on the graduating students’ comprehensive exam, and the results were disappointing: students’ ability to transfer learning from the course to a new situation and a new prompt was limited. As a result, in the following semester I restructured the larger class project to follow more closely the process a real-life university task force would follow. At the end of the Spring 2014 semester, I graded students’ group projects, and it appears that the new assignment has resulted in deeper understanding.
