Our team at Legal Design Lab is building a national network of people working on AI projects to close the justice gap, through better legal services & information.
We’re looking to find more people working on innovative new ideas & pilots. Please share your work with us using the form below.
The idea could be for:
A new AI tool or agent, to help you do a specific legal task
A new or fine-tuned AI model for use in the legal domain
A benchmark or evaluation protocol to measure the performance of AI
A policy or regulation strategy to protect people from AI harms and encourage responsible innovation
A collaboration or network initiative to build a stronger ecosystem of people working on AI & justice
Another idea you have to improve the development, performance & consumer safety of AI in legal services.
This October, Stanford Legal Design Lab hosted the first AI + Access to Justice Summit. This invite-only event focused on building a national ecosystem of innovators, regulators, and supporters to guide AI innovation toward closing the justice gap, while also protecting the public.
The Summit’s program aimed to teach frontline providers, regulators, and philanthropists about current projects, tools, and protocols for developing impactful justice AI. We did this with hands-on trainings on AI tools, platforms, and privacy/efficiency strategies. We layered on tours of what’s happening with legal aid and court help pilots, and what regulators and foundations are seeing in AI activity by lawyers and the public.
We then moved from review and learning to creative work. We workshopped how to launch new individual model & agent pilots while weaving a coordinated network with shared infrastructure, models, benchmarks, and protocols. We closed the day with a discussion about support: how to mobilize financial resources, interdisciplinary relationships, and affordable technology access.
Our goal was to launch a coordinated, inspired, strategic cohort, working together across the country to set out a common, ambitious vision. We are so thankful that so many speakers, supporters, and participants joined us to launch this network & lay the groundwork for great work yet to come.
For our upcoming AI+Access to Justice Summit and our AI for Legal Help class, our team has made a new design workbook to guide people through scoping a new AI pilot.
We encourage others to use and explore this AI Design Workbook to help think through:
Use Cases and Workflows
Specific Legal Tasks that AI could do (or should not do)
User Personas, and how they might need or worry about AI — or how they might be affected by it
Data plans for training AI and for deploying it
Risk, law, and ethics brainstorming about what could go wrong or what regulators might require, along with mitigation/prevention plans to proactively deal with these concerns
Quality and Efficiency Benchmarks to aim for with a new intervention (and how to compare the tech with the human service)
Support needed to move into the next phases of tech prototyping and pilot deployment
Responsible AI development should proceed through three careful stages: design and policy research, tech prototyping and benchmark evaluation, and piloting in a controlled, careful way. We hope this workbook can be useful to groups who want to get started on this journey!
Building on last year’s very successful academic workshop on AI & Access to Justice at Jurix ’23 in the Netherlands, this year we are pleased to announce a new workshop at Jurix ’24 in Czechia.
Margaret Hagan of the Stanford Legal Design Lab is co-leading an academic workshop at the legal technology conference Jurix, on AI for Access to Justice. Quinten Steenhuis from Suffolk LIT Lab and Hannes Westermann of Maastricht University Faculty of Law will co-lead the workshop.
We invite legal technologists, researchers, and practitioners to join us in Brno, Czechia on December 11th for a full-day, hybrid workshop on innovations in AI for helping close the access to justice gap: the majority of legal problems around the world that go unsolved because potential litigants lack the time, money, or ability to participate in court processes to solve them.
The workshop will be a hybrid event. Workshop participants will be able to participate in-person or remotely via Zoom, although we hope for broad in-person participation. Depending on interest, a selection preference may be given for in-person participation.
The workshop will feature short paper presentations (likely 10 minutes), demos, and, if possible, interactive exercises that invite attendees to help design approaches to closing the access to justice gap with AI.
Like last year, it will be a full-day workshop.
We invite contributors to submit:
short papers (5-10 pages), or
proposals for demos or interactive workshop exercises
We welcome works in progress, although, depending on interest, we will give preference to complete ideas that can be evaluated, shared, and discussed.
Submissions should focus on AI tools, datasets, and approaches (whether large language models, traditional machine learning, or rules-based systems) that solve the real-world problems of unrepresented litigants or legal aid programs. Papers discussing the ethical implications, limits, and policy implications of AI in law are also welcome.
Other topics may include:
findings of research about how AI is affecting access to justice,
evaluation of AI models and tools intended to benefit access to justice,
outcomes of new interventions intended to deploy AI for access to justice,
proposals of future work to use AI or hold AI initiatives accountable,
principles & frameworks to guide work in this area, or
other topics related to AI & access to justice
Papers should follow the formatting instructions of CEUR-WS.
Submissions will undergo peer review, with the aim of publication as workshop proceedings. Submissions will be evaluated on overall quality, technical depth, relevance, and diversity of topics to ensure an engaging, high-quality workshop.
Important dates
We invite all submissions to be made no later than November 11th, 2024.
We anticipate making decisions by November 22, 2024.
Authors are encouraged to submit an abstract even before making a final submission. You can revise your submission until the deadline of November 11th.
More about Jurix
The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.
This year, the JURIX Conference on Legal Knowledge and Information Systems will be hosted in Brno, Czechia. It will take place on December 11-13, 2024.
This week, Margaret Hagan presented at the Trust and Safety Research Conference, which brings together academics, tech professionals, regulators, nonprofits, and philanthropies to work on making the Internet a safer, more user-friendly place.
Margaret presented interim results of the Lab’s expert and user studies of AI’s performance at answering everyday legal questions, such as those about evictions and other landlord-tenant problems.
Some of the topics discussed by the audience and the panel on the Future of Search:
How can regulators, frontline domain experts (like legal aid lawyers and court professionals), and tech companies better work together to spot harmful content, set tailored policies, and ensure better outcomes for users?
Should tech companies’ and governments’ policies about the best way, and the right amount, of information to give a user differ across domains? For legal help queries, for example, is it better to encourage straightforward, simple, directive & authoritative info, or more complex, detailed information that encourages curiosity and exploration?
How do we more proactively spot the harms and risks that might come from new & novel tech systems, which might be quite different from previous search engines or other tech systems?
How can we hold tech companies accountable for making more accurate tech systems without chilling them out of certain domains (like legal or health), where they may decline to provide any substantial information for fear of liability?
This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions that individuals may get when they go online to find help.
Our team has developed a new ‘Battle Mode’ of our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. Additionally, we have the experts think aloud as they play, asking them to articulate their reasoning. This gives us insight into why a particular response is deemed good or bad, helpful or harmful.
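Pairwise “battle” judgments like these are often aggregated into an overall model ranking with an Elo-style update. The sketch below is a hypothetical illustration of that aggregation step only; the model names, starting ratings, and K-factor are invented, and this is not the actual Learned Hands scoring code.

```python
# Hypothetical Elo-style aggregation of expert "battle" votes.
# A minimal sketch, not the Learned Hands implementation.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_ratings(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Shift both ratings after one expert picks `winner` over `loser`."""
    exp_win = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - exp_win)
    ratings[loser] -= k * (1.0 - exp_win)

# Each tuple records one comparison: (winning model, losing model).
battles = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
for winner, loser in battles:
    update_ratings(ratings, winner, loser)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard)  # model_a ranks first in this toy run
```

One useful property of this scheme is that each update is zero-sum, so the rating pool stays fixed and models can only climb by winning head-to-head comparisons.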
Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries. This standardized approach will be applicable to major AI platforms, as well as local chatbots and tools developed by individual groups and startups. By doing so, we hope to refine our methods for conducting audits and benchmarks, ensuring that we can accurately assess AI’s capabilities in answering people’s legal questions.
Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment.
We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.
Stay tuned for the results of our expert-led rating games and feedback sessions!
If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.
Our team is excited to announce the new, 2024-25 version of our ongoing class, AI for Legal Help. This school year, we’re moving from background user and expert research towards AI R&D and pilot development.
Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them?
In this course, students will design and develop new demonstration AI projects and pilot plans, combining human-centered design, tech & data work, and law & policy knowledge.
Students will work on interdisciplinary teams, each partnered with frontline legal aid and court groups interested in using AI to improve their public services. Student teams will help their partners scope specific AI projects, spot and mitigate risks, train a model, test its performance, and think through a plan to safely pilot the AI.
By the end of the class, students and their partners will co-design new tech pilots to help people dealing with legal problems like evictions, reentry from the criminal justice system, debt collection, and more.
Students will get experience in human-centered AI development, and critical thinking about if and how technology projects can be used in helping the public with a high-stakes legal problem. Along with their AI pilot, teams will establish important guidelines to ensure that new AI projects are centered on the needs of people, and developed with a careful eye towards ethical and legal principles.
The Legal Design Lab is proud to announce a new monthly online, public seminar on AI & Access to Justice: Research x Practice.
At this seminar, we’ll be bringing together leading academic researchers with practitioners and policymakers, who are all working on how to make the justice system more people-centered, innovative, and accessible through AI. Each seminar will feature a presentation from either an academic or practitioner who is working in this area & has been gathering data on what they’re learning. The presentations could be academic studies about user needs or the performance of technology, or less academic program evaluations or case studies from the field.
We look forward to building a community where researchers and practitioners in the justice space can make connections, build new collaborations, and advance the field of access to justice.
Sign up for the AI&A2J Research x Practice seminar, every first Friday of the month on Zoom.
At the April 2024 Stanford CodeX FutureLaw Conference, our team at Legal Design Lab both presented research findings about users’ and subject matter experts’ approaches to AI for legal help, and led a half-day interdisciplinary workshop on possible future directions in this space.
Many of the audience members in both sessions were technologists interested in the legal space, who are not necessarily familiar with the problems and opportunities for legal aid groups, courts, and people with civil legal problems. Our goal was to help them understand the “access to justice” space and spot opportunities to which their development & research work could relate.
Some of the ideas that emerged in our hands-on workshop included the following possible AI + A2J innovations:
AI to Scan Scary Legal Documents
Several groups identified that AI could help a person who has received an intimidating legal document (a notice, a rap sheet, an immigration letter, a summons and complaint, a judge’s order, a discovery request, etc.). AI could let them take a picture of the document, synthesize the information, and present it back with a summary of what it’s about, what the important action items are, and how to get started on dealing with it.
It could make this document interactive through FAQs, service referrals, or a chatbot that lets a person understand and respond to it. It could help people take action on these important but off-putting documents, rather than avoid them.
Using AI for Better Gatekeeping of Eviction Notices & Lawsuits
One group proposed that a future AI-powered system could screen possible eviction notices or lawsuit filings, to check whether the landlord or property manager has fulfilled all obligations and met legal and policy requirements:
Landlords must upload notices.
AI tools review the notice: is it valid? have they done all they can to comply with legal and policy requirements? is there any chance to promote cooperative dispute resolution at this early stage?
If the AI lives at the court clerk level, it might help court staff detect errors, deficiencies, and other problems, helping them allocate limited human review more effectively.
AI to empower people without lawyers to respond to a lawsuit
AI could also help the respondent (tenant) prepare their side, helping them to present evidence, prep court documents, understand court hearing expectations, and draft letters or forms to send.
Future AI tools could help them understand their case, make decisions, and get work product created with little burden.
With a topic like child support modification, AI could help a person negotiate a resolution with the other party, or do a trial run to see how a possible negotiation might go. It could also change their tone, to take a highly emotional negotiation request and transform it to be more likely to get a positive, cooperative reply from the other party.
AI to make Legal Help Info More Accessible
Another group proposed that AI could be integrated into legal aid, law library, and court help centers to:
Create and maintain better inter-organization referrals, so there are warm handoffs and not confusing roundabouts when people seek help
Build clearer, better-maintained, more organized websites for a jurisdiction, with the best-quality resources curated and staged for easy navigation
Offer multi-modal presentations, to make information available in different visual formats and languages
Provide more information through speech interfaces, conversational chats, and different dialects. This was especially highlighted in immigration legal services.
AI to upskill students & pro bono clinics
Several groups talked about AI for training and providing expert guidance to staff, law students, and pro bono volunteers to improve their capacity to serve members of the public.
AI tools could be used in simulations to better educate people in a new legal practice area, and to supplement their knowledge when providing services. Expert practitioners can supply knowledge to the tools, which novice practitioners can then draw on to provide higher-quality services more efficiently in pro bono or law student clinics.
AI could also be used in community centers or other places where community justice workers operate, to get higher quality legal help to people who don’t have access to lawyers or who do not want to use lawyers.
AI to improve legal aid lawyers’ capacity
Several groups proposed AI that could be used behind-the-scenes by expert legal aid or court help lawyers. They could use AI to automate, draft, or speed up the work that they’re already doing. This could include:
Improving intake, screening, routing, and summaries of possible incoming cases
Drafting first versions of briefs, forms, affidavits, requests, motions, and other legal writing
Documenting their entire workflow & finding where AI can fit in.
Cross-Cutting action items for AI+ A2J
Across the many conversations, some common tasks emerged that cross different stakeholders and topics.
Reliable AI Benchmarks:
We as a justice community need to establish solid benchmarks to test AI effectiveness, and use these benchmarks to focus on relevant metrics.
In addition, we need to regularly report on and track AI performance at different A2J tasks.
This can help us create feedback loops for continuous improvement.
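As a hypothetical illustration of what such a benchmark-and-tracking loop could look like, the sketch below scores a model answer against a small rubric. The criterion names and the keyword-based grader are invented stand-ins; a real A2J benchmark would rely on expert review or carefully calibrated automated judges.

```python
from dataclasses import dataclass

# Hypothetical benchmark harness: score a model answer against a rubric
# of criteria that frontline experts might care about. All names are
# illustrative, not an established A2J benchmark.

@dataclass
class RubricResult:
    criterion: str
    passed: bool

def grade_answer(answer: str) -> list[RubricResult]:
    """Toy keyword grader standing in for expert or LLM-judge review."""
    text = answer.lower()
    checks = {
        "cites_deadline": "deadline" in text,
        "recommends_legal_aid": "legal aid" in text,
        "avoids_guarantees": "guaranteed to win" not in text,
    }
    return [RubricResult(name, ok) for name, ok in checks.items()]

def score(answer: str) -> float:
    """Fraction of rubric criteria the answer satisfies."""
    results = grade_answer(answer)
    return sum(r.passed for r in results) / len(results)

answer = ("You have a deadline of 5 days to respond to the summons; "
          "contact your local legal aid office for help.")
print(score(answer))  # 1.0: this toy answer passes all three checks
```

Running the same rubric on each new model release, and logging the per-criterion results over time, is one simple way to build the feedback loop described above.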
Data Handling and Feedback:
The community needs reliable strategies and rules for how to do AI work that respects obligations for confidentiality and privacy.
Can there be more synthetic datasets that still represent what’s happening in legal aid and court practice, so that organizations don’t need to share actual client information to train models?
Can there be better Personally Identifiable Information (PII) redaction for data sharing?
Who can offer guidance on what kinds of data practices are ethical and responsible?
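As one small illustration of the PII-redaction question above, a regex pass can catch obvious identifiers like emails, phone numbers, and SSNs before data is shared. This is a minimal, hypothetical sketch: real redaction pipelines also need named-entity tools to handle names, addresses, case numbers, and other context-dependent identifiers.

```python
import re

# Hypothetical, minimal PII scrub for sharing intake text.
# Catches only pattern-shaped identifiers; names, addresses, and
# case numbers would need NER-based redaction on top of this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Tenant Jo at jo@example.com or 555-123-4567 got a notice."
print(redact(sample))  # emails and phone numbers become [EMAIL] / [PHONE]
```

Even a pass like this only reduces, rather than eliminates, re-identification risk, which is why the questions above about synthetic data and ethical guidance matter.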
Low-Code AI Systems:
Legal aid and court organizations are never going to have large tech, data, or AI working groups. They will need low-code solutions that let them deploy AI systems, fine-tune them, and maintain them without huge technical requirements.
Overall, the presentation, Q&A, and workshop all pointed to enthusiasm for responsible innovation in the AI+A2J space. Tech developers, legal experts, and strategists are excited about the opportunity to improve access to justice through AI-driven solutions, and to enhance efficiency and effectiveness in legal aid. With these brainstormed ideas for solutions in hand, it is now time to move toward R&D incubation that can help us understand what is feasible and valuable in practice.