## By Lindsey P. Gustafson, UA Little Rock, William H. Bowen School of Law

Elizabeth Ruiz Frost, __Feedback Distortion: The Shortcomings of Model Answers as Formative Feedback__, 65 J. Legal Educ. 938 (2016)

Elizabeth Ruiz Frost’s article *Feedback Distortion: The Shortcomings of Model Answers as Formative Feedback* was published in 2016, but it continues to affect the way I design and critique my students’ assessment activities—both in my classroom and across our curriculum—as we respond to the ABA’s mandate for more formative assessment. Professor Frost posits that, while providing a model answer (either student- or professor-authored) in place of individual feedback may allow for *efficient* formative feedback, in most situations it does not provide *effective* formative feedback. She points to evidence that weaker students tend to misinterpret model answers and are less capable of accurately assessing their own work against the model.

In her article, Professor Frost surveys the reasons beyond efficiency a professor may have for giving feedback through a model answer: learning through a model answer encourages students to self-teach, a skill they will rely on throughout their careers; model answers provide feedback quickly, while students are still primed for it; model answers will not alienate students with personalized, negative comments; and model answers are what students clamor for. Professor Frost explains why each of these reasons is inadequate to justify what she describes as a shift in the learning burden: the professor avoids learning how to provide effective feedback by forcing the student to learn how to improve from a model.

Model answers provide effective formative assessment only if students are able to compare their work with a model and see what they did wrong. Professor Frost roots the assumption that students can do this in the “Vicarious Learning and Self-Teaching models of education, which have pervaded legal teaching since the nineteenth century.” In fact, whether this feedback is effective depends first on the characteristics and mindset of the learners, and second on the type of knowledge the professor is assessing. As to the first variable, because weaker students are less self-aware, they face a “double curse”: “[t]he weakest students, who lack the ability to distinguish between the standard exemplified by a model answer and their own work, will learn the least from a model answer. So the students who need feedback most for continued learning will get the least.”

The second variable is relevant because model answers can provide effective feedback for questions of factual knowledge and concept identification. But for any assessment that requires higher-order thinking, such as one where students must demonstrate analysis, model answers are less effective. Students instead need elaborative feedback.

Professor Frost ends her article with methods for using model answers to give feedback that best promote student learning: (a) providing an annotated model answer together with individualized feedback; (b) creating opportunities for remediation and reassessment for students after they have reviewed model answers; (c) using a student’s own work as a model answer; (d) requiring students to review model answers in small groups instead of individually; (e) providing multiple sample answers for review, including both strong and weak samples; and (f) focusing on metacognitive skills throughout so that students can better self-evaluate against model answers.

Several of her methods have worked for my students. Recently, I’ve noticed the first method recommended above working across the curriculum: students learn more from a model answer when the same skill (here, answering a midterm essay question) is tested in another course and personalized feedback is given there. In short, learning in one course is improved by the efforts of professors in other courses.