AI for Legal Help

In the Autumn and Winter quarters of 2023-24, the Legal Design Lab team will offer the policy lab class “AI for Legal Help”.

It is a 3-credit course, with course code LAW 809E.

We will be working with community groups & justice institutions to interview members of the public about whether & how they would use AI platforms (like ChatGPT) to deal with legal problems like evictions, debt collection, or domestic violence.

Our class client is the Legal Services Corporation’s TIG (Technology Initiative Grant) team.

The goal of the class is to develop a community-centered agenda about how to make these AI platforms more effective at helping people with these problems, while also identifying the key risks they pose to people & technical/policy strategies to mitigate these risks.

The class will be taught with user interviews, testing sessions, and multi-stakeholder workshops at its core, so that students synthesize diverse points of view into an agenda that can make AI tools more equitable, accessible, and responsible in the legal domain.

About the Class

Our class will conduct research, hold events, and do fieldwork in order to:

  • Learn whether & how people will use AI platforms for legal & life problem-solving. 
    • Will they use them? What will they ask AI for? Will they trust what it says? What will they do with the responses the AI gives them?
    • What kinds of ‘user types’ can we identify when it comes to using AI for legal help?
  • Find the errors, risks, and quality issues that tech companies & legal institutions should focus on mitigating.
    • Does the AI give them wrong or problematic info?
    • What are the quality standards & harm scenarios we can build into a measurement instrument?
  • Set an agenda for how AI platforms can better serve people with legal problems.
    • What tech & policy interventions can ensure risks of harm are mitigated? What will work with people’s needs and preferences?
    • How can AI platforms increase the quality of the responses they give when people ask about legal problems?
    • How can tech & legal institutions work together going forward, so that the platforms continue to be responsible, accountable, and empowering around legal & justice system use cases?
    • What datasets, AI models, benchmark standards, or other R&D pilots can advance AI for access to justice, while also ensuring consumer protection?

Students will synthesize what they learn into visual and engaging communications, and propose new strategies to improve AI platforms and oversight of them. Students will have a chance to build their skills in critical analysis, client-centered lawyering, visual communication, and policymaking. They will conduct user research, technology experiments, and legal research to identify how people use AI platforms, generate new ideas for improvements, and develop plans for new policy and technology initiatives.

How could AI help (or harm) access to justice?

Opportunities & Risks for AI, Legal Help, and Access to Justice, a report by Margaret Hagan from the Spring 2023 multi-stakeholder workshop co-hosted by the Stanford Legal Design Lab & the Self-Represented Litigation Network

Justice & AI Crossover development work report, by Margaret Hagan, following the July 2023 technical-legal crossover event to discuss possible projects

Guzman, H. (2023). AI’s “Hallucinations” Add to Risks of Widespread Adoption. Retrieved June 19, 2023, from https://www.law.com/corpcounsel/2023/03/23/ais-hallucinations-add-to-risks-of-widespread-adoption/?slreturn=20230519164801 

Granat, R. (2023). ChatGPT, Access to Justice, and UPL. Retrieved June 19, 2023, from https://www.lawproductmakers.com/2023/03/chatgtp-access-to-justice-and-upl/ 

Hagan, M. D. Towards Human-Centered Standards for Legal Help AI. Philosophical Transactions of the Royal Society A. (Forthcoming)

Holt, A. T. (2023). Legal AI-d to Your Service: Making Access to Justice a Reality. Vanderbilt Journal of Entertainment and Technology Law. Retrieved from https://www.vanderbilt.edu/jetlaw/2023/02/04/legal-ai-d-to-your-service-making-access-to-justice-a-reality/

Kanu, H. (2023, April). Artificial intelligence poised to hinder, not help, access to justice. Reuters. Retrieved from https://www.reuters.com/legal/transactional/artificial-intelligence-poised-hinder-not-help-access-justice-2023-04-25/ 

Perlman, A. (2023). The Implications of ChatGPT for Legal Services and Society. The Practice. Cambridge, MA. Retrieved from https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/ 

Poppe, E. T. (2019). The Future Is ̶B̶r̶i̶g̶h̶t̶ Complicated: AI, Apps & Access to Justice. Oklahoma Law Review, 72(1). Retrieved from https://digitalcommons.law.ou.edu/olr/vol72/iss1/8 

Simshaw, D. (2022). Access to A.I. Justice: Avoiding an Inequitable Two-Tiered System of Legal Services. Yale Journal of Law & Technology, 24, 150–226. https://yjolt.org/access-ai-justice-avoiding-inequitable-two-tiered-system-legal-services 

Stepka, M. (2022, February). Law Bots: How AI Is Reshaping the Legal Profession. ABA Business Law Today. Retrieved from https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/ 

Telang, A. (2023). The Promise and Peril of AI Legal Services to Equalize Justice. Harvard Journal of Law & Technology. Retrieved from https://jolt.law.harvard.edu/digest/the-promise-and-peril-of-ai-legal-services-to-equalize-justice 

Tripp, A., Chavan, A., & Pyle, J. (2018). Case Studies for Legal Services Community Principles and Guidelines for Due Process and Ethics in the Age of AI. Retrieved from https://docs.google.com/document/d/1rEvg5xuOs_o1njPHHpF9jtuaGi0ren6DYUElBu0Fkfk/edit 

Westermann, H., & Benyekhlef, K. (2023). JusticeBot: A Methodology for Building Augmented Intelligence Tools for Laypeople to Increase Access to Justice. ICAIL 2023. Retrieved from https://arxiv.org/pdf/2308.02032.pdf , https://www.cyberjustice.ca/en/logiciels-cyberjustice/nos-solutions-logicielles/justicebot/ 

Wilkins, S. (2023, February). DoNotPay’s Downfall Put a Harsh Spotlight on AI and Justice Tech. Now What? Legaltech News. Retrieved from https://www.law.com/legaltechnews/2023/02/10/donotpays-downfall-put-a-harsh-spotlight-on-ai-and-justice-tech-now-what/ 

JusticeBot project at the Cyberjustice Laboratory in Montreal

See the research paper: JusticeBot: A Methodology for Building Augmented Intelligence Tools for Laypeople to Increase Access to Justice by Hannes Westermann and Karim Benyekhlef

Abstract: Laypeople (i.e. individuals without legal training) may often have trouble resolving their legal problems. In this work, we present the JusticeBot methodology. This methodology can be used to build legal decision support tools that support laypeople in exploring their legal rights in certain situations, using a hybrid case-based and rule-based reasoning approach. The system asks the user questions regarding their situation and provides them with legal information, references to previous similar cases, and possible next steps. This information could potentially help the user resolve their issue, e.g. by settling their case or enforcing their rights in court. We present the methodology for building such tools, which consists of discovering typically applied legal rules from legislation and case law, and encoding previous cases to support the user. We also present an interface to build tools using this methodology and a case study of the first deployed JusticeBot version, focused on landlord-tenant disputes, which has been used by thousands of individuals.
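
The methodology combines rule-based reasoning (questions derived from legislation) with case-based reasoning (retrieval of similar decided cases). As a rough illustration of that hybrid pattern, here is a minimal Python sketch; the rules, case data, and field names are invented for this example and are not the actual JusticeBot implementation.

```python
# Minimal sketch of a hybrid rule-based + case-based legal information tool,
# in the spirit of the JusticeBot methodology described above. All rules,
# cases, and field names below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class PastCase:
    summary: str
    facts: set        # tags describing the facts of the decided case
    outcome: str

@dataclass
class Rule:
    question: str     # question posed to the user
    fact_if_yes: str  # fact tag recorded when the user answers "yes"
    info: str         # legal information shown when the fact applies

# Hypothetical rules distilled from landlord-tenant legislation
RULES = [
    Rule("Has the landlord given you a written notice to vacate?",
         "notice_received",
         "A written notice must usually meet minimum delay requirements."),
    Rule("Is the landlord claiming unpaid rent?",
         "rent_arrears",
         "Tenants can often avoid eviction by paying arrears before a hearing."),
]

# Hypothetical encoded past cases for the case-based step
CASES = [
    PastCase("Tenant paid arrears before hearing", {"rent_arrears"}, "eviction refused"),
    PastCase("Notice served with insufficient delay", {"notice_received"}, "notice invalid"),
]

def run_session(answers):
    """Apply each rule, collect facts, and return legal info plus similar past cases."""
    facts, info = set(), []
    for rule in RULES:
        if answers.get(rule.question, False):  # a real tool would prompt the user interactively
            facts.add(rule.fact_if_yes)
            info.append(rule.info)
    # Case-based step: rank encoded past cases by overlap with the user's facts
    similar = sorted(CASES, key=lambda c: len(c.facts & facts), reverse=True)
    return info, [c for c in similar if c.facts & facts]

if __name__ == "__main__":
    info, similar = run_session({"Is the landlord claiming unpaid rent?": True})
    print("Legal information:", info)
    print("Similar past cases:", [(c.summary, c.outcome) for c in similar])
```

A production tool would, as the paper describes, derive its questions from legislation and case law and encode real decisions, rather than the toy data used here.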

Spot Classifier from Suffolk LIT Lab, a classifier that can analyze people’s descriptions of their life/legal problems and identify specific legal issue codes.

Read more about the project: “Online Tool Will Help ‘Spot’ Legal Issues That People Face”
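
Spot is exposed as an API that legal aid and court websites can call to tag a plain-language problem description with legal issue codes. The sketch below shows the general shape of such a call; the endpoint URL, request fields, and response format are placeholders for illustration only, not the documented Spot API.

```python
# Illustrative sketch of calling a text-classification API like Spot to tag a
# plain-language problem story with legal issue codes. The endpoint, payload
# fields, and response shape are assumptions; consult the Suffolk LIT Lab's
# Spot API documentation for the real interface.
import requests

API_URL = "https://example-spot-api.invalid/v0/classify"  # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"                               # hypothetical credential

def classify_problem(text: str, cutoff: float = 0.5) -> list[dict]:
    """Send a user's problem story and return issue labels above a confidence cutoff."""
    response = requests.post(
        API_URL,
        json={"text": text},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    labels = response.json().get("labels", [])             # assumed response field
    return [label for label in labels if label.get("score", 0) >= cutoff]

if __name__ == "__main__":
    story = "My landlord changed the locks and says I have to be out by Friday."
    for label in classify_problem(story):
        print(label)
```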

Learned Hands, a machine learning labeling project to build a labeled dataset of people’s legal problem stories.

See the LawNext write-up: Stanford and Suffolk Create Game to Help Drive Access to Justice

Karl Branting, The Justice Access Game: Crowd-Sourced Evaluation of Systems for Pro Se Litigants, CEUR Workshop Proceedings, Vol. 3435 (2023), https://ceur-ws.org/Vol-3435/short5.pdf

Karl Branting & Sarah McLeod, Narrative-Driven Case Elicitation, CEUR Workshop Proceedings, Vol. 3435 (2023), https://ceur-ws.org/Vol-3435/short1.pdf

ChatGPT as an Artificial Lawyer

Jinzhe Tan, Hannes Westermann & Karim Benyekhlef, ChatGPT as an Artificial Lawyer?, CEUR Workshop Proceedings, Vol. 3435 (2023), https://ceur-ws.org/Vol-3435/short2.pdf

Abstract: Lawyers can analyze and understand specific situations of their clients to provide them with relevant legal information and advice. We qualitatively investigate to what extent ChatGPT (a large language model developed by OpenAI) may be able to carry out some of these tasks, to provide legal information to laypeople. This paper proposes a framework for evaluating the provision of legal information as a process, evaluating not only its accuracy in providing legal information, but also its ability to understand and reason about users’ needs. We perform an initial investigation of ChatGPT’s ability to provide legal information using several simulated cases. We also compare the performance to that of JusticeBot, a legal information tool based on expert systems. While ChatGPT does not always provide accurate and reliable information, it acts as a powerful and intuitive way to interact with laypeople. This research opens the door to combining the two approaches for flexible and accurate legal information tools.
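
The evaluation described above rests on posing simulated cases to ChatGPT and assessing the responses. As a rough sketch of how such a simulated-case harness might be wired up (not the authors’ protocol), the snippet below sends invented cases to a chat model via the OpenAI Python client and writes the responses to a CSV for human scoring; the model name, cases, and rubric dimensions are all assumptions.

```python
# Minimal sketch of an evaluation harness that poses simulated legal-problem
# cases to a chat model and records the responses for later human scoring.
# The cases and rubric dimensions are invented for illustration.
import csv
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SIMULATED_CASES = [
    "My landlord is evicting me for unpaid rent, but I never got a written notice.",
    "A debt collector keeps calling me at work about a loan I already paid off.",
]

RUBRIC = ["accuracy", "elicits_missing_facts", "next_steps", "appropriate_caveats"]

def ask_model(case: str) -> str:
    """Send one simulated case to the model and return its reply."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; this choice is an assumption
        messages=[{"role": "user", "content": case}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    with open("responses_to_score.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["case", "response"] + RUBRIC)  # rubric columns filled in by human raters
        for case in SIMULATED_CASES:
            writer.writerow([case, ask_model(case)] + [""] * len(RUBRIC))
```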

Access to AI Justice: Avoiding an Inequitable Two-Tiered System of Legal Services by Drew Simshaw, Yale Journal of Law & Technology, 2022

Artificial intelligence (AI) has been heralded for its potential to help close the access to justice gap. It can increase efficiencies, democratize access to legal information, and help consumers solve their own legal problems or connect them with licensed professionals who can. But some fear that increased reliance on AI will lead to one or more two-tiered systems: the poor might be stuck with inferior AI-driven assistance; only expensive law firms might be able to effectively harness legal AI; or, AI’s impact might not disrupt the status quo where only some can afford any type of legal assistance. The realization of any of these two-tiered systems would risk widening the justice gap. But the current regulation of legal services fails to account for the practical barriers preventing effective design of legal AI across the landscape, which make each of these two-tiered systems more likely.

Therefore, this Article argues that jurisdictions should embrace certain emerging regulatory reforms because they would facilitate equitable and meaningful access to legal AI across the legal problem-solving landscape, including by increasing competition and opportunities for collaboration across the legal services and technology industries. The Article provides a framework that demonstrates how this collaboration of legal and technical expertise will help stakeholders design and deploy AI-driven tools and services that are carefully calibrated to account for the specific consumers, legal issues, and underlying processes in each case. The framework also demonstrates how collaboration is critical for many stakeholders who face barriers to accessing and designing legal-AI due to insufficient resources, resilience, and relationships. The Article then advocates for regulatory priorities, reforms, and mechanisms to help stakeholders overcome these barriers and help foster legal AI access across the landscape. 

AI, Pursuit of Justice & Questions Lawyers Should Ask in Bloomberg Law, by Julia Brickell (Columbia University), Jeanna Matthews (Clarkson University), Denia Psarrou (University of Athens) & Shelley Podolny (Columbia University)

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT 2021 – Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922 

Bickmore, T. W., Trinh, H., Olafsson, S., O’Leary, T. K., Asadi, R., Rickles, N. M., & Cruz, R. (2018). Patient and consumer safety risks when using conversational assistants for medical information: An observational study of Siri, Alexa, and Google Assistant. Journal of Medical Internet Research, 20(9). https://doi.org/10.2196/11510

Neel Guha et al., LegalBench: Prototyping a Collaborative Benchmark for Legal Reasoning (2022), https://arxiv.org/abs/2209.06120

Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.-S., Mellor, J., … Gabriel, I. (2022). Taxonomy of Risks posed by Language Models. In ACM International Conference Proceeding Series (Vol. 22, pp. 214–229). ACM. https://doi.org/10.1145/3531146.3533088

How do people use AI tools for problem-solving?

The User Experience of ChatGPT: Findings from a Questionnaire Study of Early Users, a 12-page study (July 2023) by Marita Skjuve, Asbjørn Følstad, and Petter Bae Brandtzaeg

Should My Agent Lie for Me? A Study on Attitudes of US-based Participants Towards Deceptive AI in Selected Future-of-Work Scenarios

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, March 2021

Abstract: The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.

Deceptive AI Ecosystems: The Case of ChatGPT, a 6-page extended abstract (July 2023) by Xiao Zhan, Yifan Xu, and Stefan Sarkadi

Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant, Journal of Medical Internet Research, 2018.

Abstract: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues of using conversational assistants for medical information by patients and consumers are not understood.

Objective: To determine the prevalence and nature of the harm that could result from patients or consumers using conversational assistants for medical information.

Methods: Participants were given medical problems to pose to Siri, Alexa, or Google Assistant, and asked to determine an action to take based on information from the system. Assignment of tasks and systems was randomized across participants, and participants queried the conversational assistants in their own words, making as many attempts as needed until they either reported an action to take or gave up. Participant-reported actions for each medical task were rated for patient harm using an Agency for Healthcare Research and Quality harm scale.

Results: Fifty-four subjects completed the study with a mean age of 42 years (SD 18). Twenty-nine (54%) were female, 31 (57%) Caucasian, and 26 (50%) were college educated. Only 8 (15%) reported using a conversational assistant regularly, while 22 (41%) had never used one, and 24 (44%) had tried one “a few times.” Forty-four (82%) used computers regularly. Subjects were only able to complete 168 (43%) of their 394 tasks. Of these, 49 (29%) reported actions that could have resulted in some degree of patient harm, including 27 (16%) that could have resulted in death.

Conclusions: Reliance on conversational assistants for actionable medical information represents a safety risk for patients and consumers. Patients should be cautioned to not use these technologies for answers to medical questions they intend to act on without further consultation from a health care provider.

Doctor GPT-3: Hype or Reality?, a report from the medical tech company Nabla about its attempt to build an AI-powered application to help people with medical issues.

Read a write-up at The Register, “Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves”

Ali Borji, A Categorical Archive of ChatGPT Failures (2023), http://arxiv.org/abs/2302.03494

Stanford Institute for Human-Centered Artificial Intelligence (HAI), Artificial Intelligence Index Report 2023 (2023), https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf