The Stanford Legal Design Lab has made AI and Access to Justice a primary workstream for the coming years.
As more justice professionals & community members become aware of AI, our Lab is doing cutting-edge R&D on how AI systems perform when people use them for legal problem-solving, and on how we can build smarter, more responsible AI for access to justice.
Explore the Legal Design Lab’s Work on AI & Access to Justice
This webpage is our main hub for this workstream. Choose from the categories here to find more about our specific initiatives on AI & Access to Justice.
If you are interested in AI & Access to Justice, sign up using this form to stay updated on our work, opportunities, and events in this space.
Projects on AI & Access to Justice
Along with understanding possible problems with AI and its regulation, we are also excited to explore how AI models and tools might improve the justice system. This includes doing research, community design, and design/development of new AI that can responsibly help empower people regarding their legal rights — and empower service-providers to make the justice system more accessible and equitable.
The Legal Design Lab is working both on the infrastructure of responsible AI (gathering corpora of data, creating labeled datasets, establishing taxonomies, creating benchmarks, and finding pilot partners) and on the design & development of new AI tech/service pilots.
Learned Hands game to label people’s legal issues
Learned Hands is an online game to build labeled datasets, machine learning models, and new applications that can connect people online to high-quality legal help. Our team at the Legal Design Lab partnered with the team at Suffolk LIT Lab to build it, with the support of the Pew Charitable Trusts.
Playing the Learned Hands game lets you label people's stories with a standardized list of legal issue codes. It's a mobile-friendly web application that anyone is welcome to play, and playing can earn you pro bono credit.
The game produces a labeled dataset of people’s stories, tagged with the legal issues that apply to their situation. This dataset can be used to develop AI tools like classifiers to automatically spot people’s issues.
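To make this concrete, here is a minimal sketch of how such a labeled dataset might be structured and used to build a simple issue spotter. The field names, label codes, and keyword-matching approach are illustrative assumptions, not the actual Learned Hands schema or the ML models the project uses.

```python
from collections import defaultdict

# Hypothetical examples of the labeled records such a game could produce.
# Field names and label codes are illustrative stand-ins, not the real
# Learned Hands taxonomy.
labeled_stories = [
    {"story": "My landlord gave me an eviction notice yesterday",
     "labels": ["HOU-01"]},            # housing / eviction
    {"story": "A debt collector keeps calling me at work",
     "labels": ["MON-03"]},            # money / debt collection
    {"story": "I was kicked out and a collector says I owe back rent",
     "labels": ["HOU-01", "MON-03"]},  # stories can carry multiple labels
]

def tokenize(text):
    """Lowercase and keep only words long enough to be meaningful."""
    return [w for w in text.lower().split() if len(w) > 3]

def train_keyword_index(records):
    """Map each word to the set of labels it co-occurred with."""
    index = defaultdict(set)
    for rec in records:
        for word in tokenize(rec["story"]):
            index[word].update(rec["labels"])
    return index

def spot_issues(story, index):
    """Tag a new story with every label its words have been seen with."""
    labels = set()
    for word in tokenize(story):
        labels |= index.get(word, set())
    return sorted(labels)

index = train_keyword_index(labeled_stories)
print(spot_issues("I got an eviction notice", index))  # -> ['HOU-01']
```

A real classifier would use a trained statistical model rather than keyword lookup, but the pipeline shape is the same: labeled stories in, issue codes out.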
AI/Legal Help Problem Incident Database
We will be making this database available in the near future, as we collect and review more records. For this database, we're looking for specific examples of where AI platforms (like ChatGPT, Bard, Bing Chat, etc.) provide problematic responses, such as:
- incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
- hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
- irrelevant, distracting, or off-topic information;
- misrepresentation of the law;
- overly simplified information that loses key nuance or cautions;
- any other response that might be harmful to a person trying to get legal help.
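As a sketch, an incident record in such a database might look like the following. The class names, fields, and example data here are hypothetical assumptions for illustration; the Lab's actual database schema may differ.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

# Hypothetical problem categories mirroring the list above;
# the actual database may use a different taxonomy.
class ProblemType(Enum):
    INCORRECT_INFO = "incorrect info about rights, rules, forms, or orgs"
    HALLUCINATION = "hallucinated cases, statutes, orgs, or hotlines"
    IRRELEVANT = "irrelevant, distracting, or off-topic information"
    MISREPRESENTATION = "misrepresentation of the law"
    OVERSIMPLIFIED = "oversimplified answer losing key nuance or cautions"
    OTHER_HARM = "other potentially harmful response"

@dataclass
class Incident:
    platform: str                  # e.g. "ChatGPT", "Bard", "Bing Chat"
    date_observed: date
    user_prompt: str               # what the person asked
    ai_response: str               # what the system said
    problems: list[ProblemType] = field(default_factory=list)
    notes: str = ""

# Example record: a hallucinated hotline (entirely made up here).
incident = Incident(
    platform="ChatGPT",
    date_observed=date(2023, 10, 1),
    user_prompt="What hotline can I call about an eviction in my state?",
    ai_response="Call the National Eviction Hotline at 1-800-...",
    problems=[ProblemType.HALLUCINATION],
)
print(incident.platform, [p.name for p in incident.problems])
```

Structuring incidents this way would let researchers filter by platform or problem type when analyzing patterns of harm.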
You can report any incidents you've experienced using this form.
Recent Posts on AI & Access to Justice
On October 20th, the Legal Design Lab's executive director presented on "AI and Legal Help" to the Indiana Coalition for Court Access….
The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute to this database by…
How can regulators, researchers, and tech companies proactively protect people’s rights & privacy, even as AI becomes more ubiquitous so quickly?…
What can justice professionals be working on, to make stronger relationships with technologists researching & developing AI platforms? How can legal…
As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about…
The Lab’s Margaret Hagan was a panelist at the May 2023 national event on AI & Access to Justice hosted by…
Courses on AI & Access to Justice
Our Lab team is teaching interdisciplinary courses at Stanford Law School and the design school on how AI can be responsibly built to increase access to justice, and what limits might be put on it to protect people.
Please write to us if you are interested in taking a course, or being a partner on one.
Autumn-Winter 23-24 AI for Legal Help 809E
In Autumn-Winter quarters 2023-24, the Legal Design Lab team will offer the policy lab class “AI for Legal Help”.
It is a 3-credit course, with course code LAW 809E. We will be working with community groups & justice institutions to interview members of the public about whether & how they would use AI platforms (like ChatGPT) to deal with legal problems like evictions, debt collection, or domestic violence.
Our class client is the Legal Services Corporation’s TIG (Technology Initiative Grant) team.
The goal of the class is to develop a community-centered agenda about how to make these AI platforms more effective at helping people with these problems, while also identifying the key risks they pose to people & technical/policy strategies to mitigate these risks.
The class will be taught with user interviews, testing sessions, and multi-stakeholder workshops at its core, so that students synthesize diverse points of view into an agenda that can make AI tools more equitable, accessible, and responsible in the legal domain.
Network Events on AI & Access to Justice
The Stanford Legal Design Lab has been convening a series of workshops among key stakeholders who can design and develop new AI efforts to help people with legal problems: legal aid lawyers, court staff, judges, computer science researchers, tech developers, and community members.
JURIX ’23 AI & Access to Justice academic workshop
In December 2023, our Lab team is co-hosting an academic workshop on AI & Access to Justice at the JURIX conference on Legal Knowledge and Information Systems.
There is an open call for submissions to the workshop. Submissions are due by November 12, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending.
AI & Legal Help Crossover Workshop
In Summer 2023, an interdisciplinary group of researchers at Stanford hosted the “AI and Legal Help Crossover” event, for stakeholders from the civil justice system and computer science to meet, talk, and identify promising next steps to advance the responsible development of AI for improving the justice system.
Stanford-SRLN Spring 2023 brainstorm session
In Spring 2023, the Stanford Legal Design Lab collaborated with the Self Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics. The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.
Read more about the session & the brainstorm of opportunities and risks.
Research on AI & Access to Justice
The Stanford Legal Design Lab has been researching what community members want from AI for justice problems, how AI systems perform on justice-related queries, and what opportunities there are to increase the quality of AI in helping people with their justice problems.
Paper on user interviews about AI & Legal Help
Margaret D. Hagan, “Towards Human-Centered Standards for Legal Help AI.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. Publication Forthcoming. Available at SSRN: https://ssrn.com/abstract=4582745
As more groups consider how AI may be used in the legal sector, this paper envisions how companies and policymakers can prioritize the perspective of community members as they design AI and policies around it. It presents findings of structured interviews and design sessions with community members, in which they were asked about whether, how, and why they would use AI tools powered by large language models to respond to legal problems like receiving an eviction notice. The respondents reviewed options for simple versus complex interfaces for AI tools, and expressed how they would want to engage with an AI tool to resolve a legal problem. These empirical findings provide directions that can counterbalance legal domain experts’ proposals about the public interest around AI, as expressed by attorneys, court officials, advocates, and regulators. By hearing directly from community members about how they want to use AI for civil justice tasks, what risks concern them, and the value they would find in different kinds of AI tools, this research can ensure that people’s points of view are understood and prioritized, rather than only domain experts’ assertions about people’s needs and preferences around legal help AI.