
ChatGPT Exercise in the LRW Class

By Sandra Simpson, Professor, Gonzaga University School of Law

Professor Ashley B. Armstrong of the University of Connecticut School of Law has written a draft article examining the artificial intelligence tool known as ChatGPT and exploring its implications for legal writing classrooms.  The draft, titled Who’s Afraid of ChatGPT? An Examination of ChatGPT’s Implications for Legal Writing, is available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4336929.  This artificial intelligence is different because it generates original content for the requester, including attorneys and law students.

After reading Ashley’s draft, I reached out to her to discuss this new resource.  She provided the assignment laid out below for me to use in my classroom.  The power of this assignment is that it gives LRW professors a way to have open discussions with students about the ethical use of artificial intelligence, both as students and as future professionals.

Classroom Assignment

TO:       Associates
FROM:     Ashley Binetti Armstrong
DATE:     January 24, 2023
RE:       ChatGPT

On November 30, 2022, OpenAI launched ChatGPT (Chat Generative Pre-trained Transformer). ChatGPT is an Artificial Intelligence interface that can generate human-like text in response to user queries. I would like you to test and analyze how ChatGPT performs on a series of legal research and writing tasks. I also would like to know what concerns ChatGPT might raise related to attorney ethics. Please complete the activities described and respond to the questions below.

1. Insert the following prompt into ChatGPT:

Write a legal memo based on the following facts and questions: We have a new client, Priyanka Patel. Patel was recently involved in a swimming accident at the Ellenbosch estate in Blueridge, CT. This remote Estate is owned by Caroline Ellenbosch and features a large lake, trails, playground, and cliffs known to be great for rock climbing. I briefly interviewed Patel this afternoon. She has a lot of expenses related to the injuries she suffered, and we need to figure out if she has grounds to sue the landowner. I am not sure if this is possible, and I would like you to investigate whether Ellenbosch has landowner immunity. Limit your research to Connecticut law. You may use unreported cases. Use proper Bluebook citation form and office memo format. Facts: On September 4, 2022, around 12pm Patel drove to the Estate. She paid a $10 parking fee to park onsite. On her climb, Patel passed at least two signs that warned against use of the lake and against swimming/diving. When she reached the summit, she dove into the lake and landed in a shallow spot. She broke her right leg and fractured her tailbone. The parking lot is owned by Ellenbosch and considered part of the estate. She charges a $10 fee per car to park in the lot. There are some free parking spots on the street, “but they are too far away for it to be worth it. Parking on site is so much more convenient, obviously.” Patel estimates that the free street parking is about .5 miles from the site. Terrain is uneven, uphill, no crosswalks, no sidewalks that she recalls. There is a small bike rack on site. There is no public transportation to the property. It seems like almost all visitors pay the parking fee.

2. Insert this prompt, next:

Can you provide a list of 10 other cases I should review?

3. Insert this prompt, next:

Using the cases from the previous response, please write a legal argument for Patel’s case, following the CREAC structure.

4. Using Westlaw or Lexis, look up the cases that ChatGPT provided in its responses. More specifically, if ChatGPT provided the following case “Czepiga v. Town of Manchester, 884 A.2d 1202 (Conn. 2005),” please tell me a) whether any case by that name exists on Westlaw/Lexis; and b) what result you get when you search for “884 A.2d 1202.” Include the list of cases and answers to questions a and b, below.

5. If ChatGPT provided any statutes, or any other sources in its responses, please look those up on Westlaw or Lexis. List the source and what the source is about (e.g., title of the statute and a 1-2 sentence summary), below.

6. Describe any observations about ChatGPT’s response to question 1, above. Consider: the accuracy of the response (researching on Lexis or Westlaw), the structure of the response (compared to what you’ve learned about successful legal writing), and anything else you would like to note.

7. Describe any observations about ChatGPT’s response to question 3, above. Consider: the accuracy of the response (researching on Lexis or Westlaw), the structure of the response (compared to what you’ve learned about successful legal writing in this course), and anything else you would like to note.

8. Please provide a short (~2-4 sentence) summary of the following Model Rules of Professional Conduct: 1.1, 1.3, 2.1, 3.3, and 4.1. You should review the text of the rule and the comments to the rule.

9. What concerns about rules 1.1, 1.3, 2.1, 3.3, and 4.1 might be raised when attorneys use ChatGPT?

10. Please provide a short (~2-4 sentence) summary of Model Rule of Professional Conduct 1.6. You should review the text of the rule and the comments to the rule.

11. What concerns about rule 1.6 might be raised if attorneys use ChatGPT? Under what circumstances?

 

My Classroom Assignment Reflection

We spent all 70 minutes of class working our way through the ChatGPT exercise provided by Ashley Armstrong in her draft article and the assignment above.  I started by asking the class which AI tools they use regularly, making clear that I was not judging them but was simply curious about what they are using.  This opened up an honest discussion about artificial intelligence.  The students were only using Grammarly, spell check, brief checkers, and the like, not any product that produces original work the way ChatGPT does.  That conversation was really interesting.

Then we got into ChatGPT and the worksheet.  I had the students work in groups and report out.  They were particularly shocked by how bad the AI writing was and how much better they felt about their own emerging skills.  I then had them do the original research that the assignment had given to ChatGPT.  Many had forgotten how to come up with original search terms, limit their jurisdiction, and so on, so we backed up and reviewed the research process.  Though this was a bit of a surprise to me, it was good to get that feedback and to help them review their research skills.  Once they finished the research, they were mortified at how wrong the AI was.  Again, they felt pretty good about their research skills compared to the computer’s.

After that, I assigned one Model Rule of Professional Conduct to each team to look up, review, and discuss how ChatGPT implicates it.  (I had the groups read both the rule and the comments.)  The students really engaged in this part of the discussion; they learned the MRPC while applying the rules to the use of AI in practice.  Many of the rules surprised them.  For example, most students had never considered that posting or entering client data into an internet resource could be a breach of confidentiality.  (It’s a whole new world.)

The last thing we did was discuss what our class would like to do with this type of technology going forward.  They universally agreed that ChatGPT was so often wrong that it is dangerous to rely on, and that it would take more time to check its work than to do the work themselves.  They said they would like to see how ChatGPT handles the projects we work on this semester.  I am not sure what that looks like going forward, but we are going to start by feeding their fall final research assignment prompt into the AI to see what ChatGPT comes up with.  It should be noted that I read an article reporting that Westlaw and Lexis are looking to partner with ChatGPT so that it has access to their databases.  Oh boy.  The students were interested to see where this goes.

At the end of the class, we agreed that their work must be their own.  If they want to use a tool like ChatGPT in tandem with other resources (just as they would use a secondary source), that is up to them, but they are responsible for the end product and its accuracy.

I am not pretending to know what this will look like in the end, but for now, it felt good to talk about it.  The key here is getting ahead of this technology rather than reacting to it.

Moving Forward

If any of this listserv’s readers decide to use this assignment, please let Ashley Armstrong know what your class did and your reflections.  We are facing this new technology together!


Review: Words Count: The Empirical Relationship Between Brief Writing and Summary Judgment Success

Reviewed by Jeremiah A. Ho, University of Massachusetts School of Law

Review: Words Count: The Empirical Relationship Between Brief Writing and Summary Judgment Success, 22 J. Leg. Writing Inst. ___ (2018).

SSRN Article Link

By Shaun B. Spencer and Adam Feldman

Because I teach first-year law students, the spring semester always brings back recollections of the first-year legal writing experience, culminating in the classic appellate brief assignment.  When I came across my colleague Professor Shaun Spencer’s latest article, co-written with Adam Feldman, a J.D./Ph.D. postdoctoral fellow at Columbia Law, I thought it was apt to share, not just because the article’s subject is legal writing, but also because of what it implies for law teaching generally.  The article is titled Words Count: The Empirical Relationship Between Brief Writing and Summary Judgment Success, and it is forthcoming this year from the Journal of the Legal Writing Institute.

At the start of Spencer and Feldman’s article, the piece seems exclusively relevant to practitioners because it presents a statistical relationship between the readability of summary judgment briefs and the rate of favorable case outcomes.  According to their results, readable, proficient legal writing is a valuable commodity in law practice.  But the academic implication is also clear, because legal writing is what law schools teach.  The idea of effective legal writing lies at the heart of various legal writing textbooks and numerous pieces of scholarship on the subject.  Since Langdell, legal writing classes have been woven into the law school curriculum, and ABA accreditation standards reinforce that tradition by mandating that students take writing courses throughout their law school careers.  In this way, Spencer and Feldman’s article is one to watch.  Their empirical study underscores the value of instruction and competency in the art and skill of legal writing.

Judges might hesitate to divulge that the quality of a practitioner’s writing can influence judicial decision-making in a case, since that revelation would clash with the idea that cases are resolved by adjudication of law and facts rather than by the skill and proficiency of practitioners.  However, several scholarly studies have already examined appellate brief writing and correlated subjectivity and readability with favorable outcomes.  In their study, Spencer and Feldman bring the empirical lens to state and federal trial court briefs to determine whether a positive association exists between brief readability and case outcomes.  They frame two hypotheses.  First, “[i]ncreased brief readability will lead to a greater likelihood that a party will prevail on a motion for summary judgment.”  Second, “[w]hen the moving party’s brief is more readable than the non-moving party’s brief, the moving party will be more likely to prevail on a motion for summary judgment.”  With these hypotheses in hand, they set out to test them.

Spencer and Feldman use cognitive theory to explain their hypotheses.  Because the brain processes familiar and unfamiliar information differently, the fluency of the information presented affects whether a person processes it associatively or analytically: the more fluent the presentation, the more one tends to process associatively, and the less fluent, the more one processes analytically.  In writing, fluency can be affected by formatting and “the look” of the document (font, color, and spacing, for example) as well as by readability-related characteristics such as sentence length and complexity, grammar, and vocabulary.
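For readers curious about how “readability” is typically quantified, the article excerpted here does not say which measure Spencer and Feldman used, so the short Python sketch below is purely illustrative.  It computes the well-known Flesch Reading Ease score, one common proxy for the sentence-length and vocabulary factors described above; the syllable heuristic and the sample sentences are my own placeholders, not anything drawn from the study.

import re

def count_syllables(word):
    # Rough heuristic: count runs of vowels, with a floor of one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: higher scores indicate easier reading.
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

sample = ("The moving party is entitled to judgment as a matter of law. "
          "No genuine dispute of material fact exists.")
print(round(flesch_reading_ease(sample), 1))

Longer sentences and longer words push the score down, which tracks the authors’ intuition that denser prose is harder for a reader to process fluently.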

From here, the authors outline the research method they designed, including their reasoning for examining summary judgment briefs, a protocol for selecting briefs for their sample, and the definition and coding of variables.  In total, they examined 654 briefs in 327 cases from both federal and state courts.  What Spencer and Feldman found was that “[w]hen the moving party’s brief was more readable, the moving party was typically more likely to prevail[.]”  Also, “[m]oving from cases where the moving party’s brief is significantly less readable than the non-moving party’s brief to the opposite situations, the likelihood that the moving party prevails on the motion for summary judgment more than doubles from 42% to 85%.”  Both findings are consistent with their initial hypotheses.  The authors consider alternative explanations for these results but ultimately reject them in favor of the correlation they identified.

For lawyers and advocates, this study puts an important focus on effective, readable writing in litigation.  Although Spencer and Feldman’s study does not prove a causal relationship between brief readability and favorable case outcomes, the authors point out that the strong correlation it identifies bolsters “the ever-increasing emphasis on legal writing instruction in law school curricula, the ABA standards on law school accreditation, and continuing legal education programs.”  The study thus lends credibility to efforts to elevate the profile and status of legal writing colleagues in law schools across the country.

In reading Spencer and Feldman’s article, I was reminded of the old schoolhouse phrase “neatness counts,” though here perhaps “readability counts” is more appropriate.  With readability so strongly influenced by the proficiency of legal writing, what this study provokes in me as a doctrinal law faculty member can be crystallized into two thoughts.  First, a question: does readability correlate with final examination grades, or am I, as the grader of my exams, doing something in the grading process (assessment) that is conceptually and functionally different from adjudication?  Second, if readability does correlate with exam performance (even though I am assessing competency rather than adjudicating cases), then knowing how to shape fluency and readability is an intrinsic part of the art of lawyering, factoring into the choices and strategies a legal thinker makes in advocacy.  Beyond teaching doctrine, imparting those skills would be part of my job as well, and teaching them effectively would be another way to help my students engage with the law and feel empowered by it.  Ultimately, it is this connection, drawn from Spencer and Feldman’s study, that resonates most with me.  Beyond “readability counts” for practitioners, their study is also significant for the teaching of effective lawyering.

 
