Every Decision Counts (EDC)

Developing and researching the impact of an alternative format for multiple-choice assessment
Leonard, William J.
NSF ROLE-0106771
Starting date: 

Standard multiple-choice assessments, in which each question has four incorrect choices and exactly one correct choice, are efficient to administer but imprecise and difficult to interpret. To assess student coursework, teachers generally accept the limitations of the standard multiple-choice format in order to take advantage of its low cost and the minimal support needed to implement it.

For educational research, however, the standard format is unacceptable. Students guess; some of them guess correctly. Some are confident in their answers; others are not. With the standard format, we cannot distinguish between these states. Open-ended formats are therefore preferred, but open-ended questions and answers can be extremely time-consuming both to administer and to analyze.

[Fig. 1]

_Every Decision Counts_ (EDC) is a compromise between the open-ended and standard multiple-choice formats. Students are allowed to mark more than one choice on a standard 5-bubble answer sheet, which has two consequences. The first is that students can communicate their confidence in their answer to a standard question by selecting two or more incompatible choices: the more marks they make, the less credit they earn and the less confidence they are expressing in their answer. If they fill in all 5 bubbles, they are "just guessing."

The second consequence is that we can ask questions whose correct answer consists of more than one mark. Since there are 2^5 - 1 = 31 possible combinations of marks, it is almost impossible for students to guess the correct one. Possible scores are 0, 1, 2, 3, 4, and 5, giving a more precise categorization of students between completely correct (5/5) and completely incorrect (0/5), along with a "just guessing" category for students who mark all 5 choices. Compared to standard multiple-choice questions, a lower percentage of students are completely correct (earning all 5 points), yet the average score tends to be noticeably higher, because nearly all students earn at least 3 of the 5 points; in practice, almost nobody earns 0 or 1 point, and few earn only 2.
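The combinatorics above can be checked directly: the 31 answer patterns are the nonempty subsets of the 5 choices. The sketch below enumerates them and also illustrates, under a hypothetical linear credit rule (an assumption for illustration only, not the EDC rubric), how marking more choices earns less credit:

```python
from itertools import combinations

CHOICES = "ABCDE"

# Every nonempty combination of marks a student could make on a
# 5-bubble answer sheet: C(5,1)+C(5,2)+C(5,3)+C(5,4)+C(5,5) = 2^5 - 1 = 31.
answer_patterns = [
    set(combo)
    for r in range(1, len(CHOICES) + 1)
    for combo in combinations(CHOICES, r)
]
print(len(answer_patterns))  # 31

# Hypothetical credit rule for a single-correct-answer question (our
# assumption, not the published EDC scoring): a student who marks k
# choices including the correct one earns 6 - k points, so a single
# confident mark earns 5 and filling in all 5 bubbles earns 1.
def credit(marked: set, correct: str = "C") -> int:
    return 6 - len(marked) if correct in marked else 0

print(credit({"C"}))             # 5: one confident, correct mark
print(credit({"B", "C", "D"}))   # 3: hedged across three choices
print(credit(set(CHOICES)))      # 1: "just guessing"
```

Any rule with this shape preserves the key incentive: hedging is allowed, but each extra mark costs credit.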

[Fig. 2] [Fig. 3]

We would like to study the accuracy of EDC by comparing students' answers in this format to their answers in a more open-ended format. We would also like to study the potential of EDC for measuring problem-solving proficiency, which cannot currently be measured with the standard multiple-choice format. We anticipate securing funding in 2006 or 2007 to study EDC, identify and analyze question styles, develop and make available a database of exemplary questions, and work toward wider adoption of EDC by teachers and researchers who use multiple-choice assessments.

As a public service, we have created a website for instructors who want to use EDC. It includes web-based tools that simplify the scoring process.

EDC is a spin-off from our RRA project.