This month, our team began interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating the AI responses that individuals might get when they go online for help with commonly asked landlord-tenant questions.
Our team has developed a new ‘Battle Mode’ for our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. We also ask the experts to think aloud as they play, articulating their reasoning. This gives us insight into why a particular response is deemed good or bad, helpful or harmful.
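For readers curious about the mechanics, here is a minimal sketch of how pairwise ‘battle’ judgments like these can be aggregated into an overall ranking, using an Elo-style rating update. The model names, starting ratings, and K-factor are illustrative assumptions, not details of the actual Learned Hands platform.

```python
# Minimal sketch: turning pairwise expert judgments into model ratings
# via an Elo-style update. Model names and the K-factor are illustrative
# assumptions, not details of the actual Learned Hands platform.

K = 32  # how strongly each comparison moves the ratings

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B, given current ratings."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Shift ratings after an expert picks `winner` over `loser`."""
    e_win = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_win)
    ratings[loser] -= K * (1 - e_win)

# Hypothetical battle results: (winner, loser) pairs from expert sessions.
battles = [("model_a", "model_b"), ("model_a", "model_c"), ("model_c", "model_b")]

ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
for winner, loser in battles:
    update(ratings, winner, loser)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```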
Our group will be publishing a report that evaluates how well various AI models answer everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries, applicable to major AI platforms as well as to local chatbots and tools developed by individual groups and startups.
Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone online to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, in which they closely review what AI tells people who ask for information about a landlord-tenant problem. These experts have real-world experience in how people use (or don’t use) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment.
We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.
Stay tuned for the results of our expert-led rating games and feedback sessions!
If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.
Our team is excited to announce the new, 2024-25 version of our ongoing class, AI for Legal Help. This school year, we’re moving from background user and expert research towards AI R&D and pilot development.
Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and which technical and policy guardrails should mitigate them?
In this course, students will design and develop new demonstration AI projects and pilot plans, combining human-centered design, tech & data work, and law & policy knowledge.
Students will work on interdisciplinary teams, each partnered with frontline legal aid and court groups interested in using AI to improve their public services. Student teams will help their partners scope specific AI projects, spot and mitigate risks, train a model, test its performance, and think through a plan to safely pilot the AI.
By the end of the class, students and their partners will co-design new tech pilots to help people dealing with legal problems like evictions, reentry from the criminal justice system, debt collection, and more.
Students will gain experience in human-centered AI development and in thinking critically about whether and how technology projects can help the public with high-stakes legal problems. Along with their AI pilot, teams will establish important guidelines to ensure that new AI projects are centered on people’s needs and developed with a careful eye towards ethical and legal principles.
At the April 2024 Stanford Codex FutureLaw Conference, our team at Legal Design Lab both presented research findings about users’ and subject matter experts’ approaches to AI for legal help, and led a half-day interdisciplinary workshop on possible future directions in this space.
Many of the audience members in both sessions were technologists interested in the legal space who were not necessarily familiar with the problems and opportunities facing legal aid groups, courts, and people with civil legal problems. Our goal was to help them understand the “access to justice” space and spot opportunities where their development & research work could contribute.
Some of the ideas that emerged in our hands-on workshop included the following possible AI + A2J innovations:
AI to Scan Scary Legal Documents
Several groups identified that AI could help a person who has received an intimidating legal document — a notice, a rap sheet, an immigration letter, a summons and complaint, a judge’s order, a discovery request, etc. The person could take a picture of the document, and the AI could synthesize the information and present it back with a summary of what the document is about, the important action items, and how to get started on dealing with it.
It could make this document interactive through FAQs, service referrals, or a chatbot that lets a person understand and respond to it. It could help people take action on these important but off-putting documents, rather than avoid them.
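As a rough illustration of this idea, here is a minimal sketch of what such a document-scanning pipeline might look like. The `ocr_image` and `summarize_with_llm` helpers are hypothetical stand-ins (stubbed below) for a real OCR step and a language-model call; none of this reflects an existing tool.

```python
# Minimal sketch of the "scan a scary legal document" flow: photo -> text ->
# structured, plain-language explanation. The helpers are stubs standing in
# for a real OCR engine and language-model call.
from dataclasses import dataclass, field

@dataclass
class DocumentBrief:
    doc_type: str                                   # e.g. "summons and complaint"
    plain_summary: str                              # what the document is about
    action_items: list[str] = field(default_factory=list)
    referrals: list[str] = field(default_factory=list)

def ocr_image(photo: bytes) -> str:
    """Stub: a real system would run OCR on the uploaded photo."""
    return "SUMMONS: You are hereby summoned to answer the attached complaint..."

def summarize_with_llm(text: str) -> DocumentBrief:
    """Stub: a real system would prompt a language model for this structure."""
    return DocumentBrief(
        doc_type="summons and complaint",
        plain_summary="Your landlord has filed an eviction case against you.",
        action_items=["File an answer with the court before the deadline"],
        referrals=["Local legal aid housing unit"],
    )

def explain_document(photo: bytes) -> DocumentBrief:
    """The end-to-end pipeline a user-facing app would run."""
    return summarize_with_llm(ocr_image(photo))

print(explain_document(b"...photo bytes..."))
```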
Using AI for Better Gatekeeping of Eviction Notices & Lawsuits
One group proposed that a future AI-powered system could screen possible eviction notices or lawsuit filings, to check whether the landlord or property manager has fulfilled all obligations and met the applicable legal requirements. The proposed flow:
Landlords must upload notices.
AI tools review the notice: Is it valid? Have they done all they can to comply with legal and policy requirements? Is there any chance to promote cooperative dispute resolution at this early stage?
If the AI lives at the court clerk level, it might help court staff detect errors, deficiencies, and other problems, letting them better allocate limited human review.
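To make this concrete, here is a minimal sketch of the screening step, assuming a simple rule-based checklist. The fields and rules are illustrative; actual notice requirements vary by jurisdiction, and a real system would likely combine rules like these with AI-assisted document review.

```python
# Minimal sketch of the eviction-notice screening idea: run an uploaded
# notice through a checklist of requirements before the case proceeds.
# Fields and rules are illustrative; real requirements vary by jurisdiction.
from dataclasses import dataclass

@dataclass
class Notice:
    days_given: int
    rent_amount_stated: bool
    tenant_named: bool
    served_properly: bool

def screen_notice(notice: Notice, required_days: int = 30) -> list[str]:
    """Return a list of deficiencies for clerk or staff review."""
    problems = []
    if notice.days_given < required_days:
        problems.append(f"Only {notice.days_given} days' notice; {required_days} required.")
    if not notice.rent_amount_stated:
        problems.append("Notice does not state the amount of rent owed.")
    if not notice.tenant_named:
        problems.append("Tenant is not named on the notice.")
    if not notice.served_properly:
        problems.append("No proof of proper service.")
    return problems

print(screen_notice(Notice(days_given=3, rent_amount_stated=True,
                           tenant_named=True, served_properly=False)))
```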
AI to Empower People Without Lawyers to Respond to a Lawsuit
In addition, AI could help the respondent (tenant) prepare their side: presenting evidence, prepping court documents, understanding what to expect at court hearings, and drafting letters or forms to send.
Future AI tools could help them understand their case, make decisions, and get work product created with little burden.
With a topic like child support modification, AI could help a person negotiate a resolution with the other party, or do a trial run to see how a possible negotiation might go. It could also change their tone, to take a highly emotional negotiation request and transform it to be more likely to get a positive, cooperative reply from the other party.
AI to Make Legal Help Info More Accessible
Another group proposed that AI could be integrated into legal aid, law library, and court help centers to:
Create and maintain better inter-organization referrals, so there are warm handoffs rather than confusing roundabouts when people seek help
Build clearer, better maintained, and more organized websites for a jurisdiction, with the best-quality resources curated and staged for easy navigation
Offer multi-modal presentations, making information available in different visual formats and languages
Provide more information in audio and conversational chat formats, and across different dialects. This need was especially highlighted in immigration legal services.
AI to Upskill Students & Pro Bono Clinics
Several groups talked about AI for training and providing expert guidance to staff, law students, and pro bono volunteers to improve their capacity to serve members of the public.
AI tools could be used in simulations to better educate people in a new legal practice area, and could also supplement their knowledge when they provide services. Expert practitioners can supply knowledge to the tools, which novice practitioners can then draw on to provide higher-quality services more efficiently in pro bono or law student clinics.
AI could also be used in community centers or other places where community justice workers operate, to get higher quality legal help to people who don’t have access to lawyers or who do not want to use lawyers.
AI to Improve Legal Aid Lawyers’ Capacity
Several groups proposed AI that could be used behind-the-scenes by expert legal aid or court help lawyers. They could use AI to automate, draft, or speed up the work that they’re already doing. This could include:
Improving intake, screening, routing, and summaries of possible incoming cases
Drafting first versions of briefs, forms, affidavits, requests, motions, and other legal writing
Documenting their entire workflow & finding where AI can fit in.
Cross-Cutting Action Items for AI + A2J
Across the many conversations, some common tasks emerged that cross different stakeholders and topics.
Reliable AI Benchmarks:
We as a justice community need to establish solid benchmarks to test AI effectiveness. We can use these benchmarks to focus on relevant metrics.
In addition, we need to regularly report on and track AI performance at different A2J tasks.
This can help us create feedback loops for continuous improvement.
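As a concrete illustration of what such tracking might involve, here is a minimal sketch of a benchmark feedback loop that scores a model’s answers against an expert rubric and logs an aggregate score per run. The rubric criteria and review data are hypothetical.

```python
# Minimal sketch of a benchmark feedback loop: score each model's answers
# against expert rubric criteria, then track the aggregate score over time.
# The rubric fields, reviews, and model name below are hypothetical.
from datetime import date
from statistics import mean

RUBRIC = ["accurate", "jurisdiction_correct", "actionable", "appropriately_hedged"]

def score_answer(expert_marks: dict) -> float:
    """Fraction of rubric criteria an expert marked as satisfied."""
    return mean(1.0 if expert_marks[c] else 0.0 for c in RUBRIC)

# Hypothetical expert reviews of one model's answers to benchmark questions.
reviews = [
    {"accurate": True, "jurisdiction_correct": True,
     "actionable": False, "appropriately_hedged": True},
    {"accurate": True, "jurisdiction_correct": False,
     "actionable": True, "appropriately_hedged": True},
]

run = {"model": "model_a", "date": date.today().isoformat(),
       "benchmark_score": mean(score_answer(r) for r in reviews)}
print(run)  # append runs like this to a log to watch trends over time
```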
Data Handling and Feedback:
The community needs reliable strategies and rules for how to do AI work that respects obligations for confidentiality and privacy.
Can there be more synthetic datasets that still represent what’s happening in legal aid and court practice, so that organizations don’t need to share actual client information to train models?
Can there be better Personally Identifiable Information (PII) redaction for data sharing?
Who can offer guidance on what kinds of data practices are ethical and responsible?
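On the PII-redaction question above, here is a minimal sketch of a rule-based approach, for illustration only. Real redaction pipelines need much more than regexes (names, addresses, case numbers, and so on), and the patterns below are simplified.

```python
# Minimal sketch of rule-based PII redaction before data sharing.
# Real redaction pipelines need far more than regexes (names, addresses,
# case numbers, etc.); the patterns here are illustrative only.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sample = "Tenant Jane Doe (jane@example.com, 415-555-0123) reported no heat."
print(redact(sample))  # note: names are NOT caught by these simple rules
```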
Low-Code AI Systems:
Legal aid and court organizations are never going to have large tech, data, or AI working groups in-house. They will need low-code solutions that let them deploy, fine-tune, and maintain AI systems without heavy technical requirements.
Overall, the presentation, Q&A, and workshop all pointed to enthusiasm for responsible innovation in the AI + A2J space. Tech developers, legal experts, and strategists are excited about the opportunity to improve access to justice through AI-driven solutions, and to enhance efficiency and effectiveness in legal aid. With many ideas for solutions now brainstormed, it is time to move towards R&D incubation that can help us understand what is feasible and valuable in practice.
In mid-April, Margaret Hagan presented on the Lab’s research and development efforts around AI and access to justice at the Legal Services Corporation 50th anniversary forum. This large gathering of legal aid executive directors, national justice leaders, members of Congress, philanthropists, and corporate leaders celebrated the work of LSC & profiled future directions of legal services.
Margaret was on a panel along with legal aid leader Sateesh Nori, Suffolk Law School Dean Andy Perlman, and former LSC president James Sandman.
She presented three big takeaways for the audience about whether and how AI should be used to close the justice gap — especially how to move beyond gut reactions & anecdotes that tend towards too much optimism or skepticism. Based on the lab’s research and design activities, she proposed three big shifts for civil justice leaders regarding generative AI.
Shift 1: Towards Techno-Realism
This shift away from hardline camps of optimism or pessimism about AI’s potential futures can lead us to more empirical, detailed work. Where are the specific tasks where AI can be helpful? Can we demonstrate, with lab studies and controlled pilots, whether AI can perform better than humans at these specific tasks — with equal or higher quality and efficiency? This move towards applied research can lead to more responsible innovation, rather than rushing into AI applications too quickly or chilling the innovation space pre-emptively.
Shift 2: From Reactive to Proactive Leadership
The second shift is in how lawyers and justice professionals approach the world of AI. Will they be reactive to what technologists put out to the public, trying to create the right mix of norms, lawsuits, and regulations to push AI towards being safe enough, and high-quality enough, for legal use cases?
Instead, they can be proactive. They can be running R&D cohorts to see what AI is good at, what risks and harms emerge in these test applications, and then work with AI companies and regulators to better encourage the AI strengths and mitigate its risks. This means joining together with technologists (especially those at universities and benefit corporations) to do hands-on, exploratory demonstration project development to better inform investments, regulation, and other policy-making on AI for justice use cases.
Shift 3: Local Pilots to Coordinated Network
The final shift is about how innovators work. Legal aid groups or court staff could launch AI pilots on their own, building out a new application or bot for their local jurisdiction, and then share it at upcoming conferences to let others know about it. Or, from the beginning, they could be crafting their technical system, UX design, vendor relationships, data management, and safety evaluations in concert with others around the country who are working on similar efforts. Even if the ultimate application is run and managed locally, much of the infrastructure can be shared in national cohorts. These national cohorts can also help gather data, experiences, risk/harm incidents, and other important information that can help guide task forces, attorneys general, tech companies, and others setting the policies for legal help AI in the future.
As we continue to run interviews with people from across the country about their possible use of AI for legal help tasks, we wanted to share what we’re learning about people’s views of AI. Please see the full interactive Data Dashboard of interview results here.
Below are images of the data dashboard, showing the results of our interviews as of early 2024. Follow the link above to interact more with the data.
Participants’ Legal & Technology Capabilities
We asked people to self-assess their ability to solve legal problems and to use the Internet to solve life problems.
We also asked them how often they use the Internet.
Finally, we asked them about their past use of generative AI tools like ChatGPT, Bing/CoPilot, or Bard/Gemini.
Trust & Value of AI to Participants
We asked people at the beginning of the interview how much they would trust what AI would tell them for a legal problem.
We asked them the same question after they tried out an AI tool for a fictional legal problem of getting an eviction notice from their landlord.
We also asked them how helpful the AI was in dealing with the fictional problem, and how likely they would be to use this in the future for similar problems.
Preferences for possible AI tool features
We presented a variety of possible interface & policy changes that could be made to an AI platform.
We asked the participants to rank the utility of these different possible changes.
Last week, Margaret had the privilege of presenting on the lab’s work on AI and Innovation at the American Academy of Arts and Sciences in Cambridge, Massachusetts.
As a part of the larger conference of Making Justice Accessible, her work was featured on the panel about new solutions to improve the civil justice system through technology.
She discussed how the Lab’s current research and development work around AI has grown out of a larger question about helping people who are increasingly going online to find legal help.
The AI work is an outgrowth of previous work on
improving legal help websites,
auditing and improving search engines’ treatment of legal queries,
working on new ways to present information in more visual and plain language ways, and
building cohorts of providers across regions to have more standardized and discoverable help online.
At the Arizona State University/American Bar Foundation conference on the Future of Justice Work, Margaret Hagan spoke on whether and how generative AI might be part of new service and business models to serve people with legal problems.
Many in the audience are already developing new staffing & service models that combine traditional lawyer-provided services with help provided by community justice workers.
In the conference’s final session, the panelists discussed how technology — particularly the new generative AI models — might also figure into new initiatives to better reach & serve people struggling with eviction orders, bad living conditions, domestic violence, debt collection, custody problems, and more.
Margaret presented a brief summary of the Legal Design Lab’s work on user research into what people need & want from legal AI, how they currently use AI tools, what justice professionals are brainstorming as possible AI-powered justice work, and metrics and benchmark protocols to evaluate the AI.
This clear listing of the tasks that go into “legal work” and “legal services” that we need AI to address is similar to what people working on new community justice worker models are also doing.
Breaking legal work apart into these tasks can help us think systematically about new, stratified models of delivering services.
Inside of these zones of work, what are the specific tasks that exist (that lawyers and legal org staff currently do, or should be doing)?
Who can and should be best doing this task?
Only Seasoned Lawyers: Which of the tasks can only be done by expert lawyers, with JDs, bar admissions, and multiple years practicing in a given problem area & on this task?
Medium-to-Novice Lawyers: Which of the tasks can be done by medium-to-novice lawyers, with JDs, bar admission, but little to no practice experience in this problem area or on this task (like pro bono volunteers, or new lawyers)?
Seasoned Justice Workers: Which of the tasks can be done by people who are paralegals, advocates, volunteers, social workers, and other community justice workers who have multiple years working this problem area & doing this kind of task?
Medium-to-Novice Justice Workers: Which of the tasks can be done by community justice workers who are new to this problem area & task?
Tech + Lawyer/Justice Worker: Which of these tasks can be done by technology (initial draft/work product) then reviewed by a lawyer or justice worker?
Technology: Which of these tasks can be done by technology without human review?
Ideally, our justice community will have more of these discussions about the future of providing services with smart, safe models that can improve capacity & people’s outcomes.
Last week, our team at the Legal Design Lab presented at the Legal Services Corporation Innovations in Technology Conference on several topics: how to build better legal help websites, how to improve e-filing and forms in state court systems, and how to use texting to assess brief services for housing matters.
One of the most popular, large sessions we co-ran was on Generative AI and Access to Justice. In this panel to several hundred participants, Margaret Hagan, Quentin Steenhuis at Suffolk LIT Lab, and Hannes Westerman from CyberJustice Lab in Montreal presented on opportunities, user research, demonstrations, and risks and safety of AI and access to justice.
We started the session by doing a poll of the several hundred participants. We asked them a few questions:
Are you optimistic, pessimistic, or in between on the future of AI and access to justice?
What are you working on in this area? Do you have projects to share?
What words come to mind when you think about AI and access to justice?
Margaret presented on opportunities for AI & A2J, user research the Lab is doing on how people use AI for legal problem-solving, and what justice experts have said are the metrics to look at when evaluating AI. Quentin explained Generative AI & demonstrated Suffolk LIT Lab’s document automation AI work. Hannes presented on several AI-A2J projects he has worked on, including JusticeBot for housing law in Quebec, Canada, and LLMediator, which he has worked on with Jaromir Savelka and Karim Benyekhlef to improve dispute resolution among people in a conflict.
We then went through a hands-on group exercise to spot AI opportunities in a particular person’s legal problem-solving journey & talk through risks, guardrails, and policies to improve the safety of AI.
Here is some of what we learned at the presentation and some thoughts about moving forward on AI & Access to Justice.
Cautious Optimism about AI & Justice
Many justice professionals, especially the folks who had come to this conference and joined the AI session, are optimistic about Artificial Intelligence’s future impact on the justice system — or are waiting to see. A much smaller percentage is pessimistic about how AI will play out for access to justice.
We had 38 respondents to our poll before our presentation started, and here’s the breakdown of optimism-pessimism.
In our follow-up conversations, we heard regularly that people were excited about the possibility of AI to provide services at scale, affordably to more people (aka, ‘closing the justice gap’) — but that the regulation and controls need to be in place to prevent harm.
The word cloud of people’s responses to the question “What words come to mind when you think about AI & the justice system?” further demonstrates this cautious optimism, with a focus on smart risk mitigation to empower good, impactful innovation.
This cautious optimism is in contrast to a totally cold, prohibitive approach to AI. If we saw more pessimism, we might have heard more people expressing that there should be no use of AI in the justice system. But at least among the group who attended this conference, we saw little of that perspective (that AI should be avoided or shut down in legal services, for fear of its possibility to harm). Rather, people seemed open to exploring, testing, and collaborating on AI & Access to Justice projects, as long as there was a strong focus on protecting against bias, mistakes, and other risks that could lead to harm of the public.
We need to talk about risks more specifically
That said, despite the pervasive concern about risk and harm, there is as yet no clear framework for protecting people from them.
This could be symptomatic of the broader way that legal services have been regulated in the past: instead of talking about specific risks, we speak in generalizations about ‘protecting the consumer’. We don’t have a clear typology of what mistakes can happen, what harms can occur, how important or severe these are, and how to protect against them.
Because of this lack of a clear risk framework, most discussions about how to spot risks and harms of AI are general, anecdotal, and high-level. Even if everyone agrees that we need safety rules, including technical and policy-based interventions, we don’t have a clear menu of what those can be.
This is likely to be a large, multi-stakeholder, multi-phased process — but we need more consensus on a working framework of what risks and mistakes to watch out for, how much to prioritize them, and what kinds of things can protect people from them. Hopefully, there will be more government agencies and cross-state consortiums working on these actionable policy frameworks that can encourage responsible innovation.
Demos & User Journeys get to good AI brainstorms
Talking about AI (or brainstorming about it) can be intimidating for non-technical folks in the justice system. They may find it difficult to know where to begin when thinking about how AI could help them deliver services better, how clients could benefit, or what good role it could play in delivering justice.
Demonstration projects, like those shared by Quentin and Hannes, are beneficial to legal aid, court staff, and other legal professionals. These concrete, specific demos allow people to see exactly how AI solutions could play out — and then riff on these demos, thinking through variations of how AI could help with client tasks, service provider tasks, and the work of executive directors, funders, and others.
Demo projects don’t have to be live, in-the-field AI efforts. Rather, showing early-stage versions of AI or even more provocative ‘pie-in-the-sky’ AI applications can help spark more creativity in the justice professional community, get into more specific conversations about risks and harms, and help drive momentum to make good responsible innovation happen.
Aside from demos of AI projects, user journey exercises can also be a great way to spark a productive brainstorm of opportunities, risks, and policies.
In the second half of the presentation, we ran an interactive workshop. We shared a user story of someone going through a problem with their landlord, in which their Milwaukee apartment had the heat off and it wasn’t getting fixed.
We walked through a status quo user journey, in which they tried to seek legal help, got delayed, made some connections, and eventually got connected with someone to do a demand letter.
We asked all of the participants to work in small groups, to identify where in the status quo user journey, AI could be of help. They brainstormed lots of ideas for the particular touchpoints and stakeholders: for the user, friends and family, community organization, legal aid, and pro bono groups. We then asked them to spot safety risks and possible harms, and finally to propose ways to mitigate these risks.
This kind of specific user journey and case-type exercise can help people more clearly see how to apply the general things they’re learning about AI to specific legal services. It inspires more creativity and fosters more collaboration on where the priorities should be.
Need for a Common AI-A2J Agenda of Tasks
During our exercise and follow-up conversations, we saw a laundry list emerge of possible ways AI could help different stakeholders in the justice system. This flare-out of ideas is exciting but also overwhelming.
Which ideas are worth pursuing, funding, and piloting first?
We need a working agenda of AI and Access to Justice tasks. Participants discussed many different kinds of tasks that AI could help with:
drafting demand letters,
doing smarter triage,
referral of people to services that can be a good fit,
screening frequent, repeat plaintiffs’ filings for their accuracy and legitimacy,
providing language access,
sending reminders and empowerment coaching,
assisting people in filling in forms, and beyond.
It’s great that there are so many different ideas about how AI could be helpful, but to get more collaboration from computer scientists and technologists, we need to have a clear set of goals, prioritizing among these tasks.
Ideally, this common task list would be organized around what is feasible and impactful for service providers and community members. This task list could attract more computer scientists to help us build, fine-tune, test, and implement generative AI that can achieve these tasks.
Our group at Legal Design Lab is hard at work compiling this possible list of high-impact, lower-risk AI and Access to Justice tasks. We will be making it into a survey and asking as many people in the justice professional community as possible to rank which tasks would be the most impactful if AI could do them.
This prioritized task list will then be useful for recruiting more AI technologists and academic partners, to see if and how we can build these models, what benchmarks we should use to evaluate them, and how we can start running limited, safety-focused pilots of them in the field.
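For illustration, here is a minimal sketch of how such survey rankings might be aggregated into a single prioritized list, using a simple Borda count. The task names and responses are hypothetical.

```python
# Minimal sketch: aggregating survey rankings of candidate AI tasks into a
# single prioritized list using a Borda count. Task names are illustrative.
from collections import defaultdict

# Each response ranks tasks from most to least impactful (hypothetical data).
responses = [
    ["triage", "demand_letters", "form_filling", "language_access"],
    ["language_access", "triage", "demand_letters", "form_filling"],
    ["triage", "form_filling", "language_access", "demand_letters"],
]

scores = defaultdict(int)
for ranking in responses:
    for position, task in enumerate(ranking):
        scores[task] += len(ranking) - position  # top rank earns most points

for task, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(task, score)
```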
Join our community!
Our group will be continuing to work on building a strong community around AI and access to justice, research projects, models, and interdisciplinary collaborations. Please stay in touch with us at this link, and sign up here to stay notified about what we’re doing.
On October 20th, the Legal Design Lab’s executive director presented on “AI and Legal Help” to the Indiana Coalition for Court Access.
This presentation was part of a larger discussion about research projects, a learning community of judges, and evidence-based court policy and rules changes. What can courts, legal aid groups, and statewide justice agencies be doing to best serve people with legal problems in their communities?
Margaret’s presentation covered the initial user research that the lab has been conducting, about how different members of the public think about AI platforms in regards to legal problem-solving and how they use these platforms to deal with problems like evictions. The presentation also spotlit the concerning trends, mistakes, and harms around public use of AI for legal problem-solving, which justice institutions and technology companies should focus on in order to prevent consumer harms while harnessing the opportunity of AI to help people understand the law and take action to resolve their legal problems.
The discussion after the presentation covered topics like:
Is there a way for justice actors to build a more authoritative legal info AI model, especially with key legal information about local laws and rights, court procedures and timelines, court forms, and service organizations’ contact details? This might help the AI platforms avoid mistaken information or hallucinated details.
How could researchers measure the benefits and harms of AI-provided legal answers, compared to legal expert-provided answers, and compared to no services at all? Aside from anecdotes and small samples, is there a more deliberate way to analyze the performance of AI platforms when it comes to answering people’s questions about the law, procedures, forms, and services? This might include systematically measuring how often these platforms make mistakes, categorizing exactly what the mistakes are, and estimating or measuring how much harm emerges from these mistakes. A similar deliberate protocol might be applied to the benefits that these platforms provide.
Are you a legal aid lawyer, court staff member, judge, academic, tech developer, computer science researcher, or community advocate interested in how AI might increase Access to Justice — and also what limits and accountability we must establish so that it is equitable, responsible, and human-centered?
Sign up at this interest form to stay in the loop with our work at Stanford Legal Design Lab on AI & Access to Justice.
We will be sending those on this list updates on:
Events that we will be running online and in person
Publications, research articles, and toolkits
Opportunities for partnerships, funding, and more
Requests for data-sharing, pilot initiatives, and other efforts
Please be in touch through the form — we look forward to connecting with you!