Review: “All of the Above: Computerized Exam Scoring of Multiple Choice Items Helps To: (A) …”

By Heidi Holland, Gonzaga University School of Law

Article: “All of the Above: Computerized Exam Scoring of Multiple Choice Items Helps To: (A) Show How Exam Items Worked Technically, (B) Maximize Exam Fairness, (C) Justly Assign Letter Grades, and (D) Provide Feedback on Student Learning” by Lynn M. Daggett, 57 J. Legal Educ. 391 (Sept. 2007).

All of us are working to integrate formative assessment pursuant to the mandates of ABA Standard 314, and multiple-choice exams are one way to do it. However, few law teachers have training in how to use multiple-choice questions effectively. Though not a recent piece, Professor Lynn Daggett's article is instructive and encouraging to novice and experienced teachers alike. In “All of the Above: Computerized Exam Scoring of Multiple Choice Items Helps To: (A) Show How Exam Items Worked Technically, (B) Maximize Exam Fairness, (C) Justly Assign Letter Grades, and (D) Provide Feedback on Student Learning,” Professor Daggett introduces her readers to the strengths and limitations of multiple-choice questions, explains what they can be used to measure, shows how scoring data can guide the assignment of letter grades, and provides specific examples of these concepts in use.

Professor Daggett begins by explaining “core psychometric concepts,” including validity (construct, predictive, and content) and reliability.[1] She also frames assessment as either criterion-referenced or norm-referenced: a criterion-referenced test measures whether a student can demonstrate mastery of a skill or concept, while a norm-referenced evaluation compares a student’s performance against that of her peers.[2]

Effective instruction and assessment both require intentionality on the part of the instructor, and “law teachers have to decide what the purpose(s) of the [test/question] is, including whether the [test/question] is designed to separate out levels of learning within a class (norm-referenced evaluation), or to measure whether students have mastered specific concepts or skills (criterion-referenced evaluation).”[3] With the instructor’s goal identified, Professor Daggett then explains how computerized scoring can be used as an assessment tool.

With the mean, median, mode, standard deviation, and z-scores in mind, Professor Daggett offers instructors methods of assigning letter grades. Still, data do not substitute for professional judgment, and Professor Daggett explains how she uses the data to guide her discretion, always keeping in mind that we “perform somewhat of a gatekeeper function in assigning low letter grades, particularly in first year classes. . . . [A] grade of D+ or D from [her] means the student has demonstrated barely adequate learning of course concepts and skills and more generally should not continue in law school unless grades in other courses reflect considerably more mastery.”[4]
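To make the statistics concrete, here is a minimal sketch of how z-scores can inform norm-referenced grading. The scores and the grade cutoffs below are entirely hypothetical, chosen for illustration; they are not Professor Daggett's figures or recommendations.

```python
import statistics

# Hypothetical raw multiple-choice scores for a small class
scores = [62, 71, 75, 78, 80, 83, 85, 88, 90, 95]

mean = statistics.mean(scores)
stdev = statistics.stdev(scores)  # sample standard deviation

def z_score(raw):
    """Distance of a raw score from the class mean, in standard deviations."""
    return (raw - mean) / stdev

def letter_grade(raw):
    """Illustrative (hypothetical) norm-referenced bands keyed to z-scores."""
    z = z_score(raw)
    if z >= 1.0:
        return "A"
    elif z >= 0.0:
        return "B"
    elif z >= -1.0:
        return "C"
    else:
        return "D"

for s in scores:
    print(f"raw={s}  z={z_score(s):+.2f}  grade={letter_grade(s)}")
```

The point of the exercise is Daggett's own caveat: the z-score locates a student relative to the class, but the instructor's judgment still decides where the grade lines fall.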

As previously noted, Professor Daggett’s article is not just informative; it is instructional. She explains how to decipher and use item analysis, which is part of a standard computerized scoring report from a multiple-choice exam. The article details how to judge the efficacy of test questions, provides an example of the specific information shared with students after an exam, and encourages all of us to give students feedback about concepts they have not yet mastered. With thorough explanations and three appendices, readers should come away with a better understanding of how to use multiple-choice questions fairly to assess student learning.
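For readers unfamiliar with item analysis, the following sketch shows the two statistics such reports typically center on: item difficulty (the proportion of students answering correctly) and a simple upper-lower discrimination index. The response matrix is hypothetical, and the formulas are standard textbook versions rather than the specific report format Daggett describes.

```python
# Hypothetical response matrix: rows = students, columns = items
# (1 = answered correctly, 0 = answered incorrectly)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
]

def difficulty(item):
    """Item difficulty: proportion of students answering correctly (higher = easier)."""
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def discrimination(item):
    """Upper-lower discrimination index: proportion correct among the
    top-scoring half minus the bottom-scoring half (by total score).
    A low or negative value flags an item that may not be working."""
    ranked = sorted(responses, key=sum, reverse=True)
    half = len(ranked) // 2
    upper = sum(row[item] for row in ranked[:half]) / half
    lower = sum(row[item] for row in ranked[-half:]) / half
    return upper - lower

for i in range(len(responses[0])):
    print(f"item {i}: difficulty={difficulty(i):.2f}, discrimination={discrimination(i):+.2f}")
```

An item that nearly everyone gets right (difficulty near 1.0) or that strong and weak students answer at the same rate (discrimination near zero) is exactly the kind of question the article teaches instructors to spot and reconsider.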

 

[1] Daggett, supra, at 394.

[2] Id. at 399.

[3] Id. at 401.

[4] Id. at 401.