Tests & Writing Assignments

Although assessing student learning means more than just assigning a number or letter grade to student work, as instructors we often struggle with how best to design and administer tests and writing assignments. On this page, you will find support for exactly these issues.


An accessible version of the documents on this site is available upon request. Contact cte@ku.edu to request a document in an accessible format.

Creating Effective Tests

For testing to be as effective as possible for you and your students, you should pay attention to exam design and alignment with course content. If evaluation is considered only in hindsight, you are unlikely to use your time effectively and will run the risk of confusing students about how their learning was assessed.

Design tests that will measure the goals you set out to achieve in the course, and be clear in your instructions. In their book Effective Grading, Barbara Walvoord and Virginia Anderson recommend that teachers complete the following prompt: “By the end of the course, I want my students to be able to (fill in the blank).” Use your responses to guide assessment design.

It’s often advantageous to mix types of items (multiple choice, essay, short answer) on a written exam or to mix assessments throughout the course (e.g., a performance component with a written component). It’s also useful to ask how students in the future would be likely to use what they are learning in your course. If they’ll be expected to recognize an example of a phenomenon or category, then give them opportunities to attempt such recognition in your course. If they’ll be asked to evaluate the evidence for a claim relevant to your field, then your assignments should give them practice in such evaluation and graded feedback on their skill at it. Be sure that your assignments (both for practice and for grading) engage students in the kind of knowing or understanding that will be useful to them in future courses and in application to real life.

The process of placing a category judgment—such as a grade—on student work is rarely easy. In some cases, you can simply count the number of factual or simple items done correctly, but understanding measured by a more complex performance will need to be judged. Walvoord and Anderson (1998) claim that establishing a set of clear criteria ahead of time will make grading easier for the teacher, more consistent across students, and even faster to get done.

The key is to think through the range of feedback you want to give (e.g., points from 1 to 10 or letters from A to F) and identify how you would recognize or characterize a performance in each category. What are the strengths of an answer at each level, and what might be missing that would keep it from being in a higher category? What are the habits of mind or the kinds of knowledge demonstrated that characterize various levels of understanding?

When you engage in this kind of thinking, giving feedback should become less challenging and more efficient. If you then share those criteria with your students, they can learn more clearly what you mean by “understanding,” and there will be fewer occasions for disagreement about feedback. Ambiguous or unstated criteria are a common cause of conflict and frustration for students. Investing time up front to think through your grading criteria will pay dividends in efficiency later in the semester.

Time-limited assessments such as tests or presentations can be very stressful for all concerned. Especially in large classes that play a role in sorting out students’ future careers, there can be tension and challenges to academic honesty. Whenever possible, it’s best to create testing occasions that avoid some of the potential for cheating.

If your tests sit mostly at the rote end of Bloom’s taxonomy, students will perceive that their primary job is to memorize and regurgitate bits of knowledge; these are also the kinds of tests most amenable to unacceptable collaboration or information transfer. Whenever possible, include items that ask students to do more than merely memorize. You can even provide the basic information in the question and ask students to demonstrate their ability to analyze it. Items that require written answers present fewer opportunities for misconduct than multiple-choice items, and items that are more complex in the Bloom framework are generally less amenable to cheating. This strategy relieves your testing situation of some of the tension that mistrust creates and reduces the need for maximum-security procedures.

If you decide to use test formats that lend themselves to various forms of misconduct, then you’ll need to adopt a more skeptical attitude. There are many sources of practical advice, such as alternating exam forms and mixing bluebooks; see Davis’s (2001) guidelines in Tools for Teaching for more suggestions.

Ben Eggleston, of the KU Department of Philosophy, redesigned his introductory ethics tests to avoid simply testing memorization while still making his exams easy to grade. His tests retained their multiple-choice format but required students to apply knowledge and definitions instead of simply restating them.

Unlike questions that test only memorization of definitions, the new questions, which are set up as conversations in which students choose the statements that reflect particular ethical positions, require students to apply a deeper understanding of concepts to new situations. The advantage of the conversational format is that students must grasp the content rather than merely recall a phrase or expression from the book or class notes. Moreover, this format better tests the kind of understanding that will serve students well outside the classroom.

Old question: What is the main idea of cultural relativism?

  1. Moral beliefs vary from one culture to another.
  2. Morality itself (not just moral beliefs) varies from one culture to another.

New question: In the following dialogue, which statement is incompatible with cultural relativism?

  1. Some countries rely heavily on child labor, and would suffer devastating economic consequences if they were forced to give it up.
  2. Despite these consequences, the harms to children are too great to ignore. It is wrong of those cultures to force children to work.

For more information, see Professor Eggleston’s course transformation portfolio. As part of our Two-Minute Mentor video series, CTE Director Andrea Greenhoot and Nancy Brady, of the KU Speech-Language-Hearing Department, discuss strategies for creating effective assignments.

Robert Magnan (1990) suggests taking your students on a “test drive” to help them prepare for your exams. When you design a test, save items you decide not to use. Make a practice test with these items along with instructions for the exam, including the percentage or points for each section or exercise, and have students complete this practice test in class.

This technique has two advantages: you can field-test your exam items, and you can familiarize students with your instructions. If an exam’s structure is weak, you can improve it before the exam; if the instructions are unclear, you can clarify them.

The test drive should include only a sample of test items. If there are several possible answers to a given question, indicate which are better and why. If you’ve included essays, ask students to list the essential points they think should be included when they answer the essay question, and then evaluate their responses.

Writing & Designing for Assessment

Writing assignments can help students exhibit their mastery of material, synthesize course material, and better understand the goals and direction of a course, thus increasing overall retention and understanding of material. However, just as you are asking for effective writing from your students, you must also make sure that your own assignments are written effectively. 

John C. Bean (2011) writes that “essay exams send the important pedagogical message that mastering a field means joining its discourse, that is, demonstrating one’s ability to mount effective arguments in response to disciplinary problems.”

If students’ writing is to improve, they must understand what you expect in written work. One strategy is to provide students with copies of written work from previous years’ classes, without any instructor comments. Have students rank that work from best to worst, and ask the class to list the factors they think distinguish A-level work from work at the B, C, D, or F level. Then explain your grading criteria and discuss them with the class. When you do, students are more likely to internalize these criteria and apply them to their own work.

Similarly, you might have students work with a rubric designed around primary trait analysis. With that type of rubric, the instructor determines the criteria for each score and describes them in a handout included with the assignment or the syllabus. Having students use the rubric to assess another student’s work will help them understand the assignment and, ideally, aid them in their own work. Here’s a sample rubric (doc) that you can modify to fit your assignments.

As Stevens and Levi explain in Introduction to Rubrics, rubrics "divide an assignment into its component parts and provide a detailed description of what constitutes acceptable or unacceptable levels of performance for each of those parts." Although we generally associate rubrics with writing projects because they limit the amount of subjectivity inherent in assessing student writing, rubrics "can be used for a large variety of assignments and tasks: research papers, book critiques, discussion participation, laboratory reports, portfolios, group work, oral presentations" and other types of work. No matter what kind of assignment you ask students to complete, providing them with a rubric increases your transparency as an instructor and gives students a framework for how their work will be evaluated and graded. In keeping with the principles of backward design, we encourage you to create your rubric before you introduce the assignment, so that your rubric reflects the goals of the course as well as the level of performance you expect from students.
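The component-part structure Stevens and Levi describe also lends itself to a simple mechanical check of how criterion weights combine into a grade. The sketch below is purely illustrative; the criteria names, weights, and level descriptions are hypothetical examples, not drawn from any published rubric.

```python
# A minimal sketch of a primary-trait rubric as a data structure.
# Each criterion carries a weight and short descriptions of performance levels (4 = highest).
# All names, weights, and descriptions here are hypothetical illustrations.

RUBRIC = {
    "thesis": {
        "weight": 0.30,
        "levels": {4: "clear, arguable claim", 3: "clear but narrow",
                   2: "vague or implied", 1: "missing"},
    },
    "evidence": {
        "weight": 0.40,
        "levels": {4: "well-chosen and analyzed", 3: "relevant, thin analysis",
                   2: "sparse", 1: "absent"},
    },
    "organization": {
        "weight": 0.30,
        "levels": {4: "logical throughout", 3: "mostly logical",
                   2: "hard to follow", 1: "disorganized"},
    },
}

def score(ratings):
    """Combine per-criterion ratings (1-4) into a weighted score on a 0-100 scale."""
    total = sum(RUBRIC[c]["weight"] * ratings[c] for c in RUBRIC)
    return round(total / 4 * 100, 1)

# Example: strong thesis and organization, adequate evidence.
print(score({"thesis": 4, "evidence": 3, "organization": 4}))  # 90.0
```

Writing the rubric down this explicitly, even informally, forces the weighting decisions (here, evidence counts more than thesis or organization) into the open before any grading happens, which is the same transparency the backward-design advice above encourages.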

Another approach to helping students improve their writing is to have them practice writing cogent thesis statements in small groups, thus gaining insight and guidance from others. Allowing students to revise their work after receiving feedback from you and peers is also important, as it allows students to learn from their mistakes and helps them learn strategies for future writing assignments.

As you design your writing projects, consider how you will implement peer review as a way to scaffold writing and revision. As Bean explains in Engaging Ideas, in peer review workshops "students read and respond to each other's work in progress. The goal of these workshops is to use peer review to stimulate global revisions of drafts to improve ideas, organization, development, and sentence structure." Peer review workshops require careful planning, however: "unless the instructor structures the sessions and trains students in what to do, peer reviewers may offer eccentric, superficial, or otherwise unhelpful advice." On the other hand, peer review can be beneficial to the writer receiving feedback as well as the student providing the critique because flaws in one paper often take a similar form in another. No matter your approach, it is important to provide students with guidelines and expectations for their work during peer review. 

Another method for increasing processing of information through in-class essays is including time for pre-writing and synthesis. Some ways to achieve this include providing students with a list of all potential essay questions before exam day; requiring students to create a resource sheet for each essay question, which they may bring to the exam and use in their answers; or assigning take-home essay exams. All of these methods allow students time for deeper critical thinking and for organizing their arguments.

In Connecting the Dots: Developing Student Learning Outcomes and Outcomes-Based Assessments (Stylus Publishing, 2016), Ronald S. Carriveau offers several guidelines for creating clear and effective multiple-choice (MC) questions.

Regarding terminology, Carriveau defines the overall test “item” as including the question the students must answer as well as “all the answer choices and any special instructions or directions that are needed.” The question itself can also be called the “stem” or “stimulus.” The answers students can select are referred to as “choices” or “options,” and the incorrect options are referred to as “distractors” or “foils.” This terminology will be helpful as you read the following guidelines Carriveau offers for creating MC questions and answers.

Guidelines for Writing the Item Stem

  • Write the stem as a question.

  • Make the stem as clear as possible so the student knows exactly what is being asked.

  • Place any directions that you use to accompany text or a graphic above the text or graphic.

  • Word the question positively. Avoid negatives such as “not” or “except.”

  • Make sure that something in the stem doesn’t give a clue (cue) that will help the student choose the correct answer.

  • Don’t make the item opinion based.

  • Don’t write trick questions.

Guidelines for Writing Answer Choices

  • Traditionally, four (or more) answer choices are used, but in most cases, three options will work better.

  • Make sure that only one of the answer choices is the absolutely correct answer.

  • Ideally, the distractors should be equal in plausibility.

  • Ideally, keep the length of answer choices about equal.

  • Avoid using the choice “none of the above” or “all of the above.”

  • Avoid the choice “I don’t know.”

  • Phrase answer choices positively as much as possible.

  • Avoid giving clues to the right answer in the item options.

  • A stem that asks for a “best” answer requires careful wording of the distractors: all of the options may have some degree of correctness (thus the term “best”), but the correct answer must clearly be the best choice.

  • Don’t make a distractor humorous.

  • Don’t overlap choices. This applies primarily to ranges in numerical problems.

  • Keep the content of the answer choices as homogeneous and parallel as possible.

Guidelines for Item Format

  • Choose an item format style and follow it consistently.

  • Avoid true-false and complex MC formats.

  • List the answer choices vertically rather than horizontally.

  • Use three-option MC items.