Validating multiple choice test items
Common student errors provide the best source of distractors. Sophisticated test-takers are alert to inadvertent clues to the correct answer, such as differences in grammar, length, formatting, and language choice among the alternatives.
It’s therefore important that alternatives be parallel in grammar, length, formatting, and language. When “all of the above” is used as an alternative, test-takers who can identify more than one alternative as correct can select the correct answer even if unsure about the remaining alternative(s).
When “none of the above” is used as an alternative, test-takers who can eliminate a single option can thereby eliminate “none of the above” as well, ruling out two options at once.
In either case, students can use partial knowledge to arrive at a correct answer. Plausible alternatives serve as functional distractors: those chosen by students who have not achieved the objective but ignored by students who have achieved it.
Reliability is enhanced when the number of MC items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from the scorer inconsistency that can plague the scoring of essay questions.

Validity is the degree to which a test measures the learning outcomes it purports to measure. A scientific basis for test item writing has been slow to develop (Cronbach, 1970; Haladyna & Downing, 1989a, 1989b; Haladyna, Downing, & Rodriguez, 2002; Nitko, 1985; Roid & Haladyna, 1982).