The Compounding Effects of Assessment


By Lindsey P. Gustafson, UA Little Rock, William H. Bowen School of Law

If you’ve found your way to the Institute for Law Teaching and Learning, you are likely already a believer in formative assessment. We do have empirical evidence that formative assessment improves student learning in law: Two recent studies have shown that students who received individualized feedback during the semester outperformed students who did not on final exams, not just in the class where they received the feedback but in every single class they were taking.[1] One study’s authors note the “likelihood of this occurring by chance is one in 256.”[2]

But as we add formative assessments to students’ semesters, we must consider how we are altering the demands on their time. The middle of the semester, which has traditionally been the playground for the Socratic Method and for legal writing assignments, may now be filled with a variety of assessment activities, and some of them may dominate students’ time in a way that impacts their learning in other classes. When our assessments interfere with students’ participation in other classes, or vice versa, the inferences we draw from our assessments about student learning may not be valid. And an assessment that provides invalid data is worse than no assessment at all. Consequently, we must all consider our assessments as students experience them, “holistically and interactively.”[3]

How do we deeply coordinate assessments and avoid an assessment system that instead overwhelms students, clutters or fragments their learning, or discourages them early in their first semester? We must coordinate beyond shared calendars, starting in our own classrooms by ensuring that our assessment activities, as a slice of the student-time pie, are designed with and justified by best practices that support an assessment’s validity. In a recent article, I’ve identified five relevant best practices:

  1. Make the assessments’ alignment with learning goals transparent to students and to other faculty members with whom we intend to coordinate: A clear alignment with learning goals helps students understand how the assessments will move them towards learning goals, and helps them make informed decisions about their allocation of time. A clear alignment also allows us to clearly communicate our assessment choices to other faculty members.
  2. Use rubrics to create a shared language of instruction: Once we identify learning goals, rubrics help us refine our communication with students. They see how they will be assessed, and we see with specificity what they have learned.
  3. Ensure the assessments encourage student autonomy: One particularly harmful potential outcome of a tightly orchestrated assessment system is that it may overly dictate student decisions, rather than facilitate student autonomy. Our assessment systems should build students’ feelings of autonomy, competence, and relatedness, which are fundamental to learning.
  4. Set high expectations and display confidence that students can meet those expectations: Students prone to maladaptive responses to feedback are likely to be overwhelmed and discouraged by frequent assessments. Explaining our high expectations and displaying confidence in students can help address these tendencies.
  5. Regularly review the entire assessment system, paying particular attention to students’ ownership of their own learning within the system.

When we ground our formative assessment decisions in best practices, we are better able to communicate those decisions to students, and better able to coordinate more deeply with other faculty members.

[1] See Daniel Schwarcz & Dion Farganis, The Impact of Individualized Feedback on Law Student Performance, 67 J. Legal Educ. 139, 142 (2017) (finding that formative assessment improved performance on final exams for students with below-median entering credentials); Ruth Colker et al., Formative Assessments: A Law School Case Study, 94 U. Det. Mercy L. Rev. 387 (2017) (finding the same); Carol Springer Sargent & Andrea A. Curcio, Empirical Evidence That Formative Assessments Improve Final Exams, 61 J. Legal Educ. 379, 383–84 (2012) (finding that formative assessment improved performance on final exams for students with above-median entering credentials); Andrea A. Curcio, Gregory Todd Jones & Tanya M. Washington, Developing an Empirical Model to Test Whether Required Writing Exercises or Other Changes in Large-Section Law Class Teaching Methodologies Result in Improved Exam Performance, 57 J. Legal Educ. 195, 197 (2007) (finding the same); Andrea A. Curcio, Gregory Todd Jones & Tanya M. Washington, Does Practice Make Perfect? An Empirical Examination of the Impact of Practice Essays on Essay Exam Performance, 35 Fla. St. U. L. Rev. 271, 280–82, 302–06 (2008) (finding the same).

[2] Schwarcz & Farganis, supra note 1, at 142.

[3] See Harry Torrance, Formative Assessment at the Crossroads: Conformative, Deformative and Transformative Assessment, 38 Oxford Rev. of Educ. 323, 334 (2012) (noting that “assessment is always formative, but not necessarily in a positive way”).
