Review: Words Count: The Empirical Relationship Between Brief Writing and Summary Judgment

Reviewed by Jeremiah A. Ho, University of Massachusetts School of Law

Review: Words Count: The Empirical Relationship Between Brief Writing and Summary Judgment Success, 22 J. Leg. Writing Inst. ___ (2018).

SSRN Article Link

By Shaun B. Spencer and Adam Feldman

Because I teach first-year law students, the spring semester always brings back recollections of the first-year legal writing experience, culminating with the classic appellate brief assignment.  When I came across my colleague Professor Shaun Spencer’s latest article, co-written with Adam Feldman, a J.D./Ph.D. postdoctoral fellow at Columbia Law, I thought it was apt to share—not just because the article’s subject pertains to legal writing, but also because of what it implies for law teaching generally.  The article is titled Words Count: The Empirical Relationship Between Brief Writing and Summary Judgment Success, and it is forthcoming this year from the Journal of the Legal Writing Institute.

At the start of Spencer and Feldman’s article, the piece seems relevant chiefly to practitioners because it presents a statistical relationship between the readability of summary judgment briefs and the rate of favorable case outcomes.  Thus, according to their results, readable, proficient legal writing is a valuable commodity in law practice.  However, the academic implication is also clear, because legal writing is also what law schools teach.  The idea of effective legal writing lies at the heart of various legal writing textbooks and numerous pieces of scholarship on the subject.  Since Langdell, legal writing classes have been welded into the law school curriculum.  And ABA accreditation standards reinforce that tradition by mandating that students take writing courses throughout their law school careers.  In this way, Spencer and Feldman’s article is one to watch.  Their empirical study underscores the value of instruction and competency in the art and skill of legal writing.

Judges might hesitate to divulge that the quality of a practitioner’s writing can influence judicial decision-making—such a revelation would clash with the idea that cases are resolved by adjudicating law and facts, rather than by the skill and proficiency of practitioners.  However, several existing scholarly studies have already examined appellate brief writing and correlated subjectivity and readability with favorable outcomes.  In their study, Spencer and Feldman now bring the empirical lens to state and federal trial court briefs to determine whether a positive association exists between brief readability and case outcomes.  They frame two hypotheses.  First, “[i]ncreased brief readability will lead to a greater likelihood that a party will prevail on a motion for summary judgment.”  Second, “[w]hen the moving party’s brief is more readable than the non-moving party’s brief, the moving party will be more likely to prevail on a motion for summary judgment.”  With these hypotheses raised, they set out to test their hunches.

Spencer and Feldman use cognitive theory to explain their hypotheses.  Because the brain processes familiar and unfamiliar information differently, the fluency of the information presented affects whether a person processes new information associatively or analytically.  The more fluent the information, the more one tends to process it associatively, and vice versa—the less fluent, the more one processes it analytically.  In writing, fluency can be affected by formatting and “the look” of the document—shaped, for example, by font, color, and spacing—as well as by readability-related characteristics such as sentence length and complexity, grammar, and vocabulary.

From here, the authors outline their research design, including their reasoning for examining summary judgment briefs, the protocol for selecting briefs for their sample, and the definition and coding of variables.  In total, they examined 654 briefs in 327 cases from both federal and state courts.  Spencer and Feldman found that “[w]hen the moving party’s brief was more readable, the moving party was typically more likely to prevail[.]”  Also, “[m]oving from cases where the moving party’s brief is significantly less readable than the non-moving party’s brief to the opposite situations, the likelihood that the moving party prevails on the motion for summary judgment more than doubles from 42% to 85%.”  Both findings are consistent with their initial hypotheses.  The authors consider alternative explanations for these results but ultimately dismiss them in favor of the correlation they found.

For lawyers and advocates, this study underscores the importance of effective, presentable writing in litigation.  Although Spencer and Feldman’s study does not prove a causal relationship between brief readability and favorable case outcomes, the authors note that the strong correlation bolsters “the ever-increasing emphasis on legal writing instruction in law school curricula, the ABA standards on law school accreditation, and continuing legal education programs.”  Thus, the study lends credibility to elevating the profile and status of legal writing colleagues in law schools across the country.

In reading Spencer and Feldman’s article, I was reminded of the old schoolhouse phrase “neatness counts”—though here perhaps “readability counts” is more apt.  With readability highly influenced by the proficiency of legal writing, what this study provokes in me as a doctrinal law faculty member can be crystallized into two thoughts.  First, a question: does readability correlate with final examination grades, or am I, as the grader of my final exams, doing something in the grading process (such as assessment) that is conceptually and functionally different from adjudication?  Second, if readability does correlate with exams (even if I am assessing competency rather than adjudicating cases), then knowing how to affect fluency and readability would be an intrinsic part of the art of lawyering, factoring into the choices and strategies a legal thinker makes in advocacy.  Beyond teaching doctrine, then, imparting such skills would be part of my job as well.  Teaching them effectively would be another way to help my students engage with the law and to empower them.  Ultimately, it is this connection, drawn from Spencer and Feldman’s study, that resonates most with me.  In this way, beyond “readability counts” for practitioners, their study is also significant for the teaching of effective lawyering.

Institute for Law Teaching and Learning