The Legal Design Lab is proud to announce a new monthly public online seminar on AI & Access to Justice: Research x Practice.
At this seminar, we’ll bring together leading academic researchers with practitioners and policymakers who are all working on how to make the justice system more people-centered, innovative, and accessible through AI. Each seminar will feature a presentation from an academic or practitioner who is working in this area & has been gathering data on what they’re learning. The presentations could be academic studies about user needs or the performance of technology, or less formal program evaluations and case studies from the field.
We look forward to building a community where researchers and practitioners in the justice space can make connections, build new collaborations, and advance the field of access to justice.
Sign up for the AI & A2J Research x Practice seminar, held every first Friday of the month on Zoom.
When you explore the online legal community space, you’ll notice that the topic of AI and Access to Justice (A2J) comes up frequently. Most agree that AI & A2J is a consequential topic, yet the phrase itself remains nebulous. What do we mean by AI & A2J? What kinds of experiments are researchers and practitioners conducting? How are new tools and projects evaluated?
To delve into these questions, the Stanford Legal Design Lab recently initiated a webinar series. Each month, a presenter is invited to discuss a study or project in the AI & A2J space. Attendees can learn about new AI & A2J projects, ask questions, make connections, and find new ideas or protocols for their own work. The ultimate goal is to build more collaborations between researchers, service providers, technologists and policymakers.
The inaugural presenter for the webinar was Hannes Westermann. Hannes is an Assistant Professor in Law & Artificial Intelligence at Maastricht University and the Maastricht Law and Tech Lab. His current research focuses on using generative AI to enhance access to justice in a safe and practical manner.
Generative AI for Access to Justice
Generative AI can be a valuable addition to the access to justice space. Laypeople often have difficulty resolving their everyday legal issues, from debt to employment problems. This struggle is partly due to the complexity of the law: it is challenging for people to understand how laws apply to them and what actions they should take, assuming they can identify that they have a legal issue in the first place.
Moreover, the cost of going to court can be high, not just financially but also in terms of time and emotional stress. This is particularly true if individuals are unaware of how the process works and how they should interact with the court system.
Generative AI could help alleviate some of these issues and create more opportunities for users to interact with the legal system in a user-centered way.
Hannes spotlighted three projects during his presentation that address these issues.
Justice Bot: Aiding Landlord-Tenant Disputes
The JusticeBot project, developed at the Cyberjustice Laboratory in Montreal, provides legal information to laypeople. The first version, available at https://justicebot.ca (in French), addresses landlord-tenant disputes. The bot asks users questions about their landlord-tenant issues and provides legal information based on their responses. Users start by indicating whether they are a landlord or a tenant. Based on that choice, the system presents a series of questions tailored to common issues in that category, such as unpaid rent or early lease termination. The system guides users through these questions, providing explanations and references to relevant legal texts.
For instance, if a landlord indicates that their tenant is frequently late with rent payments, the JusticeBot asks follow-up questions to determine the frequency and impact of these late payments. It then provides information on the landlord’s legal rights and possible next steps, such as terminating the lease or seeking compensation.
The team at the Cyberjustice Laboratory collaborated with the Tribunal administratif du logement (TAL) in developing and marketing the JusticeBot. The TAL receives over 70,000 claims and over a million calls annually. By automating the initial information-gathering process, the JusticeBot could alleviate some of this demand, allowing users to resolve issues without immediate legal intervention. So far, the JusticeBot has been used over 35,000 times.
The first iteration of the bot was built as a logic tree, with explicit logical connections between questions and answers, making it possible to verify the accuracy of the legal information. In recent years, Westermann and his team have experimented with integrating language models such as GPT-4 into the JusticeBot (see here and here). This hybrid approach could ensure the accuracy of the information while enhancing the human-centered interface of the bot and increasing the efficiency of creating new bots.
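To make the hybrid idea concrete, here is a minimal sketch of how a logic-tree bot with an optional LLM layer might be structured. This is an illustration of the general pattern, not the JusticeBot’s actual code; the node fields, the rephrase hook, and the example content are all hypothetical.

```python
from dataclasses import dataclass, field

# Minimal sketch of a logic-tree Q&A bot with an optional LLM layer.

@dataclass
class Node:
    question: str                                # fixed, expert-authored question
    legal_reference: str = ""                    # citation shown alongside it
    answers: dict = field(default_factory=dict)  # answer text -> next Node
    outcome: str = ""                            # legal information at a leaf

def rephrase(text: str) -> str:
    """Hypothetical LLM hook: restate expert-authored text in plainer
    language. A real system would call a language model here, but the
    logic tree, not the model, still determines the legal content."""
    return text  # placeholder: pass the text through unchanged

def run(node: Node) -> None:
    while node.answers:                          # interior node: ask and branch
        print(rephrase(node.question), f"[{node.legal_reference}]")
        choice = input(f"Options {list(node.answers)}: ")
        node = node.answers.get(choice, node)    # re-ask on unknown input
    print(rephrase(node.outcome))                # leaf: verified information

# Tiny illustrative tree for a late-rent scenario (content is made up)
yes_leaf = Node(question="", outcome="Frequent late payments may justify "
                "applying to the tribunal; see the cited provisions.")
no_leaf = Node(question="", outcome="No claim on this ground; other "
               "remedies may apply.")
root = Node(question="Is your tenant frequently late with rent?",
            legal_reference="illustrative citation",
            answers={"yes": yes_leaf, "no": no_leaf})
# run(root)  # uncomment to try the interactive flow
```

Because the tree, not the model, carries the legal content, the rephrasing layer can be swapped in or out without changing what the bot actually asserts.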
DALLMA: Document Assistance
The next project Hannes discussed is DALLMA, which stands for Document Automation, Large Language Model Assisted. This early-stage project aims to automate the drafting of legal documents using large language models. The current version focuses on forms, which people often find complicated to fill out. Users provide basic information and context, and the AI assists in structuring the document and populating it with relevant legal content. In the future, this could make drafting forms and other legal documents more efficient.
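A rough sketch of this general pattern might look like the following: a (stubbed) language-model call extracts structured fields from the user’s narrative, and a deterministic template, not the model, writes the final document. The function names, fields, and form text are hypothetical, not DALLMA’s actual API.

```python
from string import Template

# Sketch of the pattern: an LLM turns a user's narrative into structured
# fields; a deterministic template writes those fields into the document.

FORM = Template(
    "APPLICATION TO THE TRIBUNAL\n"
    "Applicant: $applicant\n"
    "Respondent: $respondent\n"
    "Claim: $claim\n"
    "Remedy sought: $remedy\n")

def extract_fields(narrative: str) -> dict:
    """Hypothetical LLM call: prompt a model to return the form's fields
    as structured data. Stubbed so the sketch runs without an API key."""
    return {"applicant": "J. Tremblay",
            "respondent": "Acme Property Mgmt",
            "claim": "Repeated late rent payments",
            "remedy": "Lease termination"}

fields = extract_fields("My tenant keeps paying rent weeks late...")
print(FORM.substitute(fields))   # the template guarantees the structure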
LLMediator: Enhancing Online Dispute Resolution
The LLMediator explores the use of large language models in online dispute resolution (ODR). It suggests ways to phrase communications more amicably during disputes, analyzing the content and sentiment of each message to prevent escalation and promote resolution. For example, if the LLMediator detects aggressive language that could escalate the conflict, it might suggest more constructive phrasing. It can also draft a potential intervention message for a human mediator, supporting their work of encouraging a friendly resolution to the dispute. In short, it acts as a virtual assistant that offers suggestions to mediators and parties while still leaving the final decision to the user.
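The pattern is easy to sketch. In the toy example below, a crude keyword check stands in for the model’s sentiment analysis, and a canned string stands in for its suggested rewrite; the names and contents are hypothetical, and the user keeps the final say.

```python
# Toy sketch of the LLMediator pattern: flag an escalating message and
# *suggest* a friendlier rewrite; a human decides whether to use it.

HOSTILE_MARKERS = {"never", "liar", "sue you", "ridiculous"}

def looks_escalating(message: str) -> bool:
    """Crude stand-in for an LLM sentiment/intent classifier."""
    return any(marker in message.lower() for marker in HOSTILE_MARKERS)

def suggest_rewrite(message: str) -> str:
    """Hypothetical LLM call: ask a model to rephrase the message more
    constructively while preserving its substance. Stubbed here."""
    return ("I was frustrated to receive the deposit late. Could we "
            "agree on a date for the remaining amount?")

draft = "You are a liar and I will sue you over the deposit!"
if looks_escalating(draft):
    print("Suggested rewrite:", suggest_rewrite(draft))
    # The user chooses: send the original, the suggestion, or edit further.
```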
The Challenges of AI
The projects presented by Hannes show the promise of integrating LLMs into the A2J space. However, it is important to be aware of the challenges involved in bringing such tools into the legal system. One issue is hallucination, which occurs when the AI generates plausible-sounding but incorrect information, a particular danger in the legal domain. Hannes explained that this happens because the AI predicts the most probable continuation of a phrase based on its training data, which does not guarantee accuracy. More research needs to be done to find ways to mitigate these issues. One potential solution is to conceptualize systems as “augmented intelligence,” as demonstrated in the LLMediator project. In this approach, the AI system does not provide predictions or recommendations to the user; rather, it provides information or suggestions that help users make better decisions or accomplish tasks more efficiently.
Another potential solution is to combine AI systems with transparent, logical reasoning systems, as shown in DALLMA. This approach has the potential to combine the power of large language models with legal expert knowledge, ensuring that users receive accurate legal information. It could also help tackle biases that may be present in the training datasets of AI models.
Privacy is another concern, especially in the legal field, which deals with large amounts of sensitive and confidential information. This data can be sent to external servers when using large language models. However, Hannes notes that recent developments in AI technology have led to powerful local AI models that offer more privacy protections. AI providers could also offer contractual guarantees of data protection.
To make sure that AI is implemented in a safe and practical manner in the legal system, it is important to keep these and other challenges in mind. Potential ways of mitigating the challenges include technical innovations and evaluations, regulatory and ethical considerations, guidelines for use of AI in legal contexts, and education for users about the limitations of AI and the importance of verifying the information received through AI models.
Future direction
Hannes concluded his presentation by stating that generative AI should be viewed as a powerful tool that augments human intelligence. The analogy he used is that of the carpenter and the hammer.
“Will law be replaced by AI is a bit like asking: ‘Will the carpenter be replaced by the hammer?’ It just kind of doesn’t make sense as a question, as the hammer is a tool used by the carpenter and not as a replacement for them.”
AI is a powerful tool that can be a useful addition to use cases in the access to justice space. More research needs to be done to better understand the use cases and evaluate the tools. Hannes hopes that the community will engage with the systems and understand what they have to offer so that we can leverage AI to increase access to justice in a safe way.
At the April 2024 Stanford CodeX FutureLaw Conference, our team at the Legal Design Lab both presented research findings about users’ and subject matter experts’ approaches to AI for legal help, and led a half-day interdisciplinary workshop on what future directions are possible in this space.
Many of the audience members in both sessions were technologists interested in the legal space, who are not necessarily familiar with the problems and opportunities for legal aid groups, courts, and people with civil legal problems. Our goal was to help them understand the “access to justice” space and spot opportunities to which their development & research work could relate.
Some of the ideas that emerged in our hands-on workshop included the following possible AI + A2J innovations:
AI to Scan Scary Legal Documents
Several groups identified that AI could help a person who has received an intimidating legal document, such as a notice, a rap sheet, an immigration letter, a summons and complaint, a judge’s order, or a discovery request. AI could let them take a picture of the document, synthesize the information, and present it back with a summary of what it’s about, what the important action items are, and how to get started on dealing with it.
It could make this document interactive through FAQs, service referrals, or a chatbot that lets a person understand and respond to it. It could help people take action on these important but off-putting documents, rather than avoid them.
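As a sketch of what such a pipeline might look like, under the assumption that an OCR library and an LLM handle the two hard steps (both stubbed here, with hypothetical function names and made-up content):

```python
from dataclasses import dataclass

# Sketch of the "scan a scary document" idea: photo in, plain-language
# summary and action items out. OCR and summarization are stubbed; a
# real build might use an OCR library (e.g. pytesseract) plus an LLM.

@dataclass
class DocumentBrief:
    doc_type: str
    summary: str
    action_items: list
    deadline: str

def ocr(image_path: str) -> str:
    """Stub: extract text from the photo of the document."""
    return "SUMMONS ... you must respond within 30 days ..."

def summarize(text: str) -> DocumentBrief:
    """Stub: prompt an LLM to classify the document and pull out what
    it is, what to do, and by when. Contents here are illustrative."""
    return DocumentBrief(
        doc_type="Summons and complaint",
        summary="A lawsuit has been filed against you.",
        action_items=["File a written response with the court",
                      "Look up free legal aid in your county"],
        deadline="30 days from service")

brief = summarize(ocr("photo_of_notice.jpg"))
print(brief.doc_type, "-", brief.summary)
for step in brief.action_items:
    print("TODO:", step, f"(deadline: {brief.deadline})")
```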
Using AI for Better Gatekeeping of Eviction Notices & Lawsuits
One group proposed that a future AI-powered system could screen possible eviction notices or lawsuit filings, to check whether the landlord or property manager has fulfilled all obligations and met all legal requirements before the case proceeds. The envisioned workflow (a minimal screening sketch follows the list):
Landlords must upload notices.
AI tools review the notice: Is it valid? Have they done all they can to comply with legal and policy requirements? Is there any chance to promote cooperative dispute resolution at this early stage?
If the AI lives at the court clerk level, it might help court staff detect errors, deficiencies, and other problems, helping them allocate limited human review.
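A minimal version of the screening step could be a deterministic checklist over fields extracted from the uploaded notice. The rules and field names below are illustrative only; real requirements vary by jurisdiction and would have to be authored with the court.

```python
# Sketch of the gatekeeping idea: a deterministic checklist over fields
# extracted from an uploaded notice, so court staff can triage filings.

REQUIRED_FIELDS = ["tenant_name", "address", "notice_date", "cure_deadline"]

def screen_notice(notice: dict) -> list:
    """Return a list of deficiencies; an empty list means the notice
    passes the automated checks and needs only light human review."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS
                if not notice.get(f)]
    if notice.get("days_to_cure", 0) < 10:   # illustrative local rule
        problems.append("cure period shorter than required minimum")
    if not notice.get("mediation_offered"):
        problems.append("flag for early dispute-resolution outreach")
    return problems

notice = {"tenant_name": "R. Alvarez", "address": "12 Elm St",
          "notice_date": "2024-03-01", "days_to_cure": 5}
for issue in screen_notice(notice):
    print("REVIEW:", issue)
```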
AI to empower people without lawyers to respond to a lawsuit
In addition, AI could help the respondent (tenant) prepare their side, helping them present evidence, prep court documents, understand court hearing expectations, and draft letters or forms to send.
Future AI tools could help them understand their case, make decisions, and get work product created with little burden.
With a topic like child support modification, AI could help a person negotiate a resolution with the other party, or do a trial run to see how a possible negotiation might go. It could also change their tone, to take a highly emotional negotiation request and transform it to be more likely to get a positive, cooperative reply from the other party.
AI to make Legal Help Info More Accessible
Another group proposed that AI could be integrated into legal aid, law library, and court help centers to:
Create and maintain better inter-organization referrals, so there are warm handoffs rather than confusing roundabouts when people seek help
Build clearer, better-maintained, more organized websites for a jurisdiction, with the best-quality resources curated and staged for easy navigation
Offer multi-modal presentations, making information available in different visual formats and languages
Provide more information in speech-to-text formats, conversational chats, and different dialects; this was especially highlighted for immigration legal services
AI to upskill students & pro bono clinics
Several groups talked about AI for training and providing expert guidance to staff, law students, and pro bono volunteers to improve their capacity to serve members of the public.
AI tools could be used in simulations to better educate people in a new legal practice area, and to supplement their knowledge when providing services. Expert practitioners can supply knowledge to the tools, which novice practitioners can then use to provide higher-quality services more efficiently in pro bono or law student clinics.
AI could also be used in community centers or other places where community justice workers operate, to get higher quality legal help to people who don’t have access to lawyers or who do not want to use lawyers.
AI to improve legal aid lawyers’ capacity
Several groups proposed AI that could be used behind-the-scenes by expert legal aid or court help lawyers. They could use AI to automate, draft, or speed up the work that they’re already doing. This could include:
Improving intake, screening, routing, and summaries of possible incoming cases
Drafting first versions of briefs, forms, affidavits, requests, motions, and other legal writing
Documenting their entire workflow & finding where AI can fit in.
Cross-Cutting Action Items for AI + A2J
Across the many conversations, some common tasks emerged that cross different stakeholders and topics.
Reliable AI Benchmarks:
As a justice community, we need to establish solid benchmarks to test AI effectiveness, and use those benchmarks to focus on relevant metrics.
In addition, we need to regularly report on and track AI performance at different A2J tasks.
This can help us create feedback loops for continuous improvement.
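As a sketch of what such a benchmark loop might look like, assuming a shared test set with expert-specified criteria: the scorer below is a trivial keyword check, whereas a real benchmark would use expert rubrics or graded human review. All names and test items are hypothetical.

```python
# Sketch of a benchmark loop for A2J tasks: run a system over a shared
# test set, score answers against expert criteria, track results over time.

TEST_SET = [
    {"question": "My landlord kept my deposit. What can I do?",
     "must_mention": ["small claims", "written demand"]},
    {"question": "I got a summons. Do I have to respond?",
     "must_mention": ["deadline", "default judgment"]},
]

def ask_model(question: str) -> str:
    """Stub for the system under test (an LLM, a bot, a search tool)."""
    return "Send a written demand letter, then consider small claims court."

def score(answer: str, must_mention: list) -> float:
    hits = sum(term in answer.lower() for term in must_mention)
    return hits / len(must_mention)

results = [score(ask_model(item["question"]), item["must_mention"])
           for item in TEST_SET]
print(f"mean task score: {sum(results) / len(results):.2f}")
# Logged per release, scores like these create the feedback loop
# for continuous improvement described above.
```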
Data Handling and Feedback:
The community needs reliable strategies and rules for how to do AI work that respects obligations for confidentiality and privacy.
Can there be more synthetic datasets that still represent what’s happening in legal aid and court practice, so organizations don’t need to share actual client information to train models?
Can there be better Personally Identifiable Information (PII) redaction for data sharing? (A toy redaction example follows this list.)
Who can offer guidance on what kinds of data practices are ethical and responsible?
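On the PII question, here is a toy illustration of rule-based redaction, and of its limits. The patterns are U.S.-style and purely illustrative; production redaction would combine patterns with NER models and human spot checks.

```python
import re

# Toy rule-based PII redaction. Regexes catch only well-formatted
# identifiers; this is a sketch of the idea, not a redaction tool.

PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

record = ("Client Jane Doe, SSN 123-45-6789, phone 650-555-0199, "
          "email jane@example.org, reports an illegal lockout.")
print(redact(record))
# Note: the client's name still leaks. Pattern matching alone is not
# sufficient redaction, which is exactly why the guidance question matters.
```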
Low-Code AI Systems:
Legal aid and court organizations are never going to have large tech, data, or AI working groups. They will need low-code solutions that let them deploy AI systems, fine-tune them, and maintain them without a huge technical requirement.
Overall, the presentation, Q&A, and workshop all pointed to enthusiasm for responsible innovation in the AI + A2J space. Tech developers, legal experts, and strategists are excited about the opportunity to improve access to justice through AI-driven solutions, and to enhance efficiency and effectiveness in legal aid. With these brainstormed ideas in hand, it is time to move towards R&D incubation that can help us understand what is feasible and valuable in practice.
The article summarizes the Legal Design Lab’s work, partnerships & human-centered design approach to tackle legal challenges & develop new technologies.
The article covers our recent user & legal help provider research, our initial phase of groundwork research, and our new phase of R&D to see if we can develop legal AI solutions in partnership with frontline providers.
Finally, the article touches on our research on quality metrics & our upcoming AI platform audit.
In mid-April, Margaret Hagan presented on the Lab’s research and development efforts around AI and access to justice at the Legal Services Corporation 50th anniversary forum. This large gathering of legal aid executive directors, national justice leaders, members of Congress, philanthropists, and corporate leaders celebrated the work of LSC & profiled future directions of legal services.
Margaret was on a panel along with legal aid leader Sateesh Nori, Suffolk Law School Dean Andy Perlman, and former LSC president James Sandman.
She presented 3 big takeaways for the audience about whether and how AI should be used to close the justice gap, especially the need to move beyond gut reactions and anecdotes that tend towards too much optimism or skepticism. Based on the lab’s research and design activities, she proposed 3 big shifts for civil justice leaders regarding generative AI.
Shift 1: Towards Techno-Realism
Shifting away from hardline camps of too much optimism or pessimism about AI’s potential futures can lead us to more empirical, detailed work. Where are the specific tasks where AI can be helpful? Can we demonstrate, with lab studies and controlled pilots, whether AI can perform these specific tasks better than humans, with equal or higher quality and efficiency? This move towards applied research can lead to more responsible innovation, rather than rushing into AI applications too quickly or chilling the innovation space pre-emptively.
Shift 2: From Reactive to Proactive leadership
The second shift is in how lawyers and justice professionals approach the world of AI. Will they be reactive to what technologists put out to the public, trying to create the right mix of norms, lawsuits, and regulations to push AI towards being safe enough, and high-quality enough, for legal use cases?
Instead, they can be proactive. They can run R&D cohorts to see what AI is good at and what risks and harms emerge in test applications, and then work with AI companies and regulators to encourage AI’s strengths and mitigate its risks. This means joining together with technologists (especially those at universities and benefit corporations) to do hands-on, exploratory demonstration project development that can better inform investments, regulation, and other policy-making on AI for justice use cases.
Shift 3: Local Pilots to Coordinated Network
The final shift is about how innovators work. Legal aid groups or court staff could launch AI pilots on their own, building out a new application or bot for their local jurisdiction, and then share it at upcoming conferences to let others know about it. Or, from the beginning, they could be crafting their technical system, UX design, vendor relationships, data management, and safety evaluations in concert with others around the country who are working on similar efforts. Even if the ultimate application is run and managed locally, much of the infrastructure can be shared in national cohorts. These national cohorts can also help gather data, experiences, risk/harm incidents, and other important information that can help guide task forces, attorneys general, tech companies, and others setting the policies for legal help AI in the future.
As we continue to run interviews with people from across the country about their possible use of AI for legal help tasks, we wanted to share what we’re learning. Please see the full interactive Data Dashboard of interview results here.
Below are images of the data dashboard, showing the results of our interviews as of early 2024. Follow the link above to interact with the data.
Participants’ Legal & Technology Capabilities
We asked people to self-assess their ability to solve legal problems and to use the Internet to solve life problems.
We also asked them how often they use the Internet.
Finally, we asked them about their past use of generative AI tools like ChatGPT, Bing/CoPilot, or Bard/Gemini.
Trust & Value of AI to Participants
We asked people at the beginning of the interview how much they would trust what AI would tell them for a legal problem.
We asked them the same question after they tried out an AI tool for a fictional legal problem of getting an eviction notice from their landlord.
We also asked them how helpful the AI was in dealing with the fictional problem, and how likely they would be to use this in the future for similar problems.
Preferences for possible AI tool features
We presented a variety of possible interface & policy changes that could be made to an AI platform.
We asked the participants to rank the utility of these different possible changes.
Last week, Margaret had the privilege of presenting on the lab’s work on AI and Innovation at the American Academy of Arts and Sciences in Cambridge, Massachusetts.
As a part of the larger conference of Making Justice Accessible, her work was featured on the panel about new solutions to improve the civil justice system through technology.
She discussed how the Lab’s current research and development work around AI has grown out of a larger question about helping people who are increasingly going online to find legal help.
The AI work is an outgrowth of previous work on
improving legal help websites,
auditing and improving search engines’ treatment of legal queries,
working on new ways to present information more visually and in plain language, and
building cohorts of providers across regions to have more standardized and discoverable help online.
At the Arizona State University/American Bar Foundation conference on the Future of Justice Work, Margaret Hagan spoke on whether and how generative AI might be part of new service and business models to serve people with legal problems.
Many in the audience are already developing new staffing & service models that combine traditional lawyer-provided services with help provided by community justice workers.
In the conference’s final session, the panelists discussed how technology — particularly the new generative AI models — might also figure into new initiatives to better reach & serve people struggling with eviction orders, bad living conditions, domestic violence, debt collection, custody problems, and more.
Margaret presented a brief summary of the Legal Design Lab’s work on user research into what people need & want from legal AI, how they currently use AI tools, what justice professionals are brainstorming as possible AI-powered justice work, and metrics and benchmark protocols to evaluate the AI.
This kind of clear listing of the tasks that go into “legal work” and “legal services,” which we need to do for AI, is similar to what people working on new community justice worker models are also doing.
Breaking legal work apart into these tasks can help us think systematically about new, stratified models of delivering services.
Inside these zones of work, what are the specific tasks that exist (that lawyers and legal organization staff currently do, or should be doing)?
Who can and should be doing each task? (A toy routing sketch follows the list below.)
Only Seasoned Lawyers: Which of the tasks can only be done by expert lawyers, with JDs, bar admissions, and multiple years practicing in a given problem area & on this task?
Medium-to-Novice Lawyers: Which of the tasks can be done by medium-to-novice lawyers, with JDs, bar admission, but little to no practice experience in this problem area or on this task (like pro bono volunteers, or new lawyers)?
Seasoned Justice Workers: Which of the tasks can be done by people who are paralegals, advocates, volunteers, social workers, and other community justice workers who have multiple years working this problem area & doing this kind of task?
Medium-to-Novice Justice Workers: Which of the tasks can be done by community justice workers who are new to this problem area & task?
Tech + Lawyer/Justice Worker: Which of these tasks can be done by technology (initial draft/work product) then reviewed by a lawyer or justice worker?
Technology: Which of these tasks can be done by technology without human review?
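One way to picture this stratified model is as a routing table that tags each task with the least-expert tier judged capable of handling it safely. The tiers and task assignments below are illustrative only, not a policy proposal.

```python
# Toy sketch of the stratified service model: each task is tagged with
# the least-expert tier that can safely handle it, so scarce expert time
# is reserved for tasks that truly require it.

TIERS = ["technology", "tech_plus_reviewer", "novice_justice_worker",
         "seasoned_justice_worker", "novice_lawyer", "seasoned_lawyer"]

TASK_FLOOR = {  # minimum tier judged capable of the task (illustrative)
    "answer basic process questions": "technology",
    "draft a standard answer form": "tech_plus_reviewer",
    "accompany client to a hearing": "novice_justice_worker",
    "negotiate settlement terms": "seasoned_justice_worker",
    "argue a contested motion": "seasoned_lawyer",
}

def route(task: str, available: set) -> str:
    """Return the least-expert available tier at or above the floor."""
    floor = TIERS.index(TASK_FLOOR[task])
    for tier in TIERS[floor:]:
        if tier in available:
            return tier
    raise ValueError("no capable provider available")

print(route("draft a standard answer form",
            {"seasoned_lawyer", "tech_plus_reviewer"}))
# -> tech_plus_reviewer: software drafts the form, a human reviews it
```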
Ideally, our justice community will have more of these discussions about the future of providing services with smart, safe models that can improve capacity & people’s outcomes.
Is your team working on a legal innovation project, using a human-centered design approach? Then you are likely focused on different kinds of ‘users’, ‘stakeholders’, or ‘audience members’ as you plan out your innovation.
Our Legal Design Lab team has a free Canva template to make your own user personas easily.
This Canva template gives blank & example user persona templates that your team can fill in with your interviewees’ and stakeholders’ details. We recommend making multiple personas, and then possibly also making a schematic comparing different kinds of users.
The template document includes example images drawn by Margaret Hagan, that you can use in making your persona.
Have fun & let us know how your user persona creation goes!
Last week, Margaret Hagan traveled to Houston, Texas for the National Center for State Courts convening of Eviction Diversion Initiative facilitators. She ran a half-day workshop on how to use human-centered design to improve the program design, paperwork, and service delivery of eviction diversion help at housing courts around the country.
This design workshop built on the several years of design work that the Legal Design Lab has done with courts and legal aid groups across the country to improve how people facing eviction are helped, and to improve their life outcomes.
Workshop participants, including lawyers, social workers, and court staff who work on running new eviction diversion programs in courts across the country, were able to go through the following sequence:
learning the basics of design mindsets, including focusing on your users’ point of view and creating new experiments to see what can work better
choosing an example user to focus on, and creating a persona to summarize that person’s situation, needs, goals, and preferences
detailing that person’s current user journey through the housing and eviction system, and whether they get good or bad outcomes around housing, money, credit reports, and other factors
zooming in on a particular touchpoint on this user journey where a new intervention could improve the person’s experiences and outcomes
brainstorming many different ways that this problem/opportunity touchpoint could be improved, including with new paperwork, new service delivery models, new space designs, new cultural or rule shifts, or new technology tools. Participants were shown an array of possible innovation projects, which they could build on top of
choosing a handful of the brainstormed ideas to bring home, share with colleagues, and try out in short pilots
It was wonderful to work with leaders from across the country, especially those who are so creative, empathetic, and ready to try out new ideas to make the court system work better for normal people.
[Workshop photos: Downtown Houston; Identifying User Personas; Mapping out User Journeys; Spotting Opportunities; Brainstorming Ideas]
Some of the ideas included:
new paperwork that’s more supportive, clear & encouraging
space redesigns in hallways and courtrooms, to make them more humane, breathable, polite, and dignified
technology tools that offer coaching and check-ins
data connections to improve efficiencies, and more!
See the presentation slides for the eviction diversion design workshop.