In Autumn-Winter quarters 2023-24, the Legal Design Lab team will offer the policy lab class “AI for Legal Help”.

It is a 3-credit course, with course code LAW 809E.

We will be working with community groups & justice institutions to interview members of the public about whether & how they would use AI platforms (like ChatGPT) to deal with legal problems like evictions, debt collection, or domestic violence.

The goal of the class is to develop a community-centered agenda for making these AI platforms more effective at helping people with these problems, while also identifying the key risks they pose to people & the technical/policy strategies to mitigate those risks.

The class will be taught with user interviews, testing sessions, and multi-stakeholder workshops at its core, so that students synthesize diverse points of view into an agenda that can make AI tools more equitable, accessible, and responsible in the legal domain.

How could AI help (or harm) access to justice?

Opportunities & Risks for AI, Legal Help, and Access to Justice, a report by Margaret Hagan from the Spring 2023 multi-stakeholder workshop co-hosted by the Stanford Legal Design Lab & the Self-Represented Litigation Network

Justice & AI Crossover development work report, by Margaret Hagan, from the July 2023 technical-legal crossover event to discuss possible projects

JusticeBot project at the Cyberjustice Laboratory in Montreal

See the research paper: JusticeBot: A Methodology for Building Augmented Intelligence Tools for Laypeople to Increase Access to Justice, by Hannes Westermann and Karim Benyekhlef

Laypeople (i.e. individuals without legal training) may often have trouble resolving their legal problems. In this work, we present the JusticeBot methodology. This methodology can be used to build legal decision support tools that support laypeople in exploring their legal rights in certain situations, using a hybrid case-based and rule-based reasoning approach. The system asks the user questions regarding their situation and provides them with legal information, references to previous similar cases and possible next steps. This information could potentially help the user resolve their issue, e.g. by settling their case or enforcing their rights in court. We present the methodology for building such tools, which consists of discovering typically applied legal rules from legislation and case law, and encoding previous cases to support the user. We also present an interface to build tools using this methodology and a case study of the first deployed JusticeBot version, focused on landlord-tenant disputes, which has been used by thousands of individuals.
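To make the hybrid approach described in the abstract concrete, here is a minimal, illustrative sketch of how a rule-based question flow combined with simple case retrieval could be structured. The rules, past cases, keywords, and wording are invented placeholders for demonstration; this is not JusticeBot's actual code, data, or legal content.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    legal_info: str        # plain-language explanation of the likely outcome
    next_steps: list[str]  # possible next steps for the user

@dataclass
class Question:
    text: str
    if_yes: object  # Question or Outcome
    if_no: object   # Question or Outcome

# Rule-based part: a tiny decision tree encoding invented landlord-tenant rules.
TREE = Question(
    text="Did your landlord give you written notice before ending the lease?",
    if_yes=Question(
        text="Was the notice period at least the minimum required in your jurisdiction?",
        if_yes=Outcome(
            "The notice may be valid; you can still negotiate or contest specific terms.",
            ["Review the notice details", "Contact a tenant advice service"],
        ),
        if_no=Outcome(
            "A notice period that is too short may make the notice invalid.",
            ["Keep a copy of the notice", "Consider challenging it at the housing tribunal"],
        ),
    ),
    if_no=Outcome(
        "Ending a lease usually requires written notice.",
        ["Ask the landlord for written notice", "Document all communication"],
    ),
)

# Case-based part: retrieve previous (fictional) cases with similar facts by keyword overlap.
PAST_CASES = [
    {"summary": "Tenant received a 3-day notice; tribunal found the notice period too short.",
     "keywords": {"notice", "short"}},
    {"summary": "Landlord ended the lease verbally; tribunal required written notice.",
     "keywords": {"verbal", "written", "notice"}},
]

def similar_cases(user_keywords, top_k=2):
    scored = sorted(PAST_CASES, key=lambda c: len(c["keywords"] & user_keywords), reverse=True)
    return [c["summary"] for c in scored[:top_k]]

def run(node, user_keywords):
    # Walk the rule tree interactively, then show legal info, similar cases, and next steps.
    while isinstance(node, Question):
        answer = input(node.text + " (y/n) ").strip().lower()
        node = node.if_yes if answer.startswith("y") else node.if_no
    print("\nLegal information:", node.legal_info)
    print("Similar past cases:", *similar_cases(user_keywords), sep="\n - ")
    print("Possible next steps:", *node.next_steps, sep="\n - ")

if __name__ == "__main__":
    run(TREE, user_keywords={"written", "notice"})
```

The paper describes deriving the rules from legislation and case law and encoding previously decided cases; the sketch only shows how rule-based branching and case retrieval can be combined in a single guided flow.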

Spot Classifier from the Suffolk LIT Lab, a classifier that can analyze people’s descriptions of their life/legal problems and identify specific legal issue codes.

Read more about the project: “Online Tool Will Help ‘Spot’ Legal Issues That People Face”
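As an illustration of the kind of issue-spotting Spot performs, here is a minimal sketch of a text classifier that maps short problem descriptions to legal issue codes. The training examples, label names, and model choice (TF-IDF features with logistic regression) are placeholders chosen for demonstration; this is not the LIT Lab's actual Spot model, taxonomy, or API.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: (problem description, issue code).
TRAIN = [
    ("My landlord is trying to evict me and changed the locks", "HOUSING"),
    ("A debt collector keeps calling me about an old credit card bill", "DEBT"),
    ("My ex-partner is threatening me and I need a protective order", "DOMESTIC_VIOLENCE"),
    ("I got a notice that my apartment lease is being terminated", "HOUSING"),
    ("My wages are being garnished for a loan I don't recognize", "DEBT"),
]
texts, labels = zip(*TRAIN)

# TF-IDF features + logistic regression: a simple baseline issue-code classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Classify a new problem story and show the issue codes ranked by probability.
story = "The sheriff posted an eviction notice on my door yesterday"
probs = model.predict_proba([story])[0]
for code, p in sorted(zip(model.classes_, probs), key=lambda x: -x[1]):
    print(f"{code}: {p:.2f}")
```

A production issue-spotter would be trained on a much larger labeled corpus, such as the dataset the Learned Hands project (below) is building, and evaluated carefully before being used to route people to legal help.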

Learned Hands machine learning labeling project, to build a labeled dataset of people’s legal problem stories.

See the LawNext write-up: Stanford and Suffolk Create Game to Help Drive Access to Justice

Access to AI Justice: Avoiding an Inequitable Two-Tiered System of Legal Services by Drew Simshaw, Yale Journal of Law & Technology, 2022

Artificial intelligence (AI) has been heralded for its potential to help close the access to justice gap. It can increase efficiencies, democratize access to legal information, and help consumers solve their own legal problems or connect them with licensed professionals who can. But some fear that increased reliance on AI will lead to one or more two-tiered systems: the poor might be stuck with inferior AI-driven assistance; only expensive law firms might be able to effectively harness legal AI; or, AI’s impact might not disrupt the status quo where only some can afford any type of legal assistance. The realization of any of these two-tiered systems would risk widening the justice gap. But the current regulation of legal services fails to account for the practical barriers preventing effective design of legal AI across the landscape, which make each of these two-tiered systems more likely.

Therefore, this Article argues that jurisdictions should embrace certain emerging regulatory reforms because they would facilitate equitable and meaningful access to legal AI across the legal problem-solving landscape, including by increasing competition and opportunities for collaboration across the legal services and technology industries. The Article provides a framework that demonstrates how this collaboration of legal and technical expertise will help stakeholders design and deploy AI-driven tools and services that are carefully calibrated to account for the specific consumers, legal issues, and underlying processes in each case. The framework also demonstrates how collaboration is critical for many stakeholders who face barriers to accessing and designing legal AI due to insufficient resources, resilience, and relationships. The Article then advocates for regulatory priorities, reforms, and mechanisms to help stakeholders overcome these barriers and help foster legal AI access across the landscape.

AI, Pursuit of Justice & Questions Lawyers Should Ask, in Bloomberg Law, by Julia Brickell (Columbia University), Jeanna Matthews (Clarkson University), Denia Psarrou (University of Athens) & Shelley Podolny (Columbia University)

How do people use AI tools for problem-solving?

The User Experience of ChatGPT: Findings from a Questionnaire Study of Early Users, a 12-page study from July 2023 by Marita Skjuve, Asbjorn Folstad, and Petter Bae Brandtzaeg

Should My Agent Lie for Me? A Study on Attitudes of US-based Participants Towards Deceptive AI in Selected Future-of-Work Scenarios

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, March 2021

Abstract: The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.

Deceptive AI Ecosystems: The Case of ChatGPT, a 6-page extended abstract by Xiao Zhan, Yifan Xu, and Stefan Sarkadi, July 2023

Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant, Journal of Medical Internet Research, 2018.

Abstract: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues of using conversational assistants for medical information by patients and consumers are not understood.

Objective: To determine the prevalence and nature of the harm that could result from patients or consumers using conversational assistants for medical information.

Methods: Participants were given medical problems to pose to Siri, Alexa, or Google Assistant, and asked to determine an action to take based on information from the system. Assignment of tasks and systems was randomized across participants, and participants queried the conversational assistants in their own words, making as many attempts as needed until they either reported an action to take or gave up. Participant-reported actions for each medical task were rated for patient harm using an Agency for Healthcare Research and Quality harm scale.

Results: Fifty-four subjects completed the study with a mean age of 42 years (SD 18). Twenty-nine (54%) were female, 31 (57%) Caucasian, and 26 (50%) were college educated. Only 8 (15%) reported using a conversational assistant regularly, while 22 (41%) had never used one, and 24 (44%) had tried one “a few times.” Forty-four (82%) used computers regularly. Subjects were only able to complete 168 (43%) of their 394 tasks. Of these, 49 (29%) reported actions that could have resulted in some degree of patient harm, including 27 (16%) that could have resulted in death.

Conclusions: Reliance on conversational assistants for actionable medical information represents a safety risk for patients and consumers. Patients should be cautioned to not use these technologies for answers to medical questions they intend to act on without further consultation from a health care provider.

Doctor GPT-3: Hype or Reality?, a report from the medical tech company Nabla about its attempt to build an AI-powered application to help people with medical issues.

Read a write-up at The Register, “Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves”
