Is your team working on a legal innovation project, using a human-centered design approach? Then you are likely focused on different kinds of ‘users’, ‘stakeholders’, or ‘audience members’ as you plan out your innovation.
Our Legal Design Lab team has a free Canva template to make your own user personas easily.
The Canva file includes blank and example user persona templates that your team can fill in with your interviewees’ and stakeholders’ details. We recommend making multiple personas, and then possibly also making a schematic comparing different kinds of users.
The template document includes example images drawn by Margaret Hagan that you can use when making your personas.
Have fun & let us know how your user persona creation goes!
Last week, Margaret Hagan traveled to Houston, Texas for the National Center for State Courts convening of Eviction Diversion Initiative facilitators. She ran a half-day workshop on how to use human-centered design to improve the program design, paperwork, and service delivery of eviction diversion help at housing courts around the country.
This design workshop built on the several years of design work that the Legal Design Lab has done with courts and legal aid groups across the country to improve how people facing eviction are helped and to improve their life outcomes.
Workshop participants, including lawyers, social workers, and court staff who work on running new eviction diversion programs in courts across the country, were able to go through the following sequence:
learning the basics of design mindsets, including focusing on your users’ point of view and creating new experiments to see what can work better
choosing an example user to focus on and creating a persona to summarize that person’s situation, needs, goals, and preferences
detailing that person’s current user journey through the housing and eviction system, and whether they get to good or bad outcomes around housing, money, credit reports, and other factors
zooming in on a particular touchpoint on this user journey, where a new intervention could improve the person’s experiences and outcomes
brainstorming many different ways that this problem/opportunity touchpoint could be improved, including with new paperwork, new service delivery models, new space designs, new cultural or rule shifts, or new technology tools. Participants were shown an array of possible innovation projects, which they could build on top of
choosing a handful of the brainstormed ideas to then bring home to share with colleagues and to try out in short pilots
It was wonderful to work with leaders from across the country, especially those who are so creative, empathetic, and ready to try out new ideas to make the court system work better for normal people.
Some of the ideas included:
new paperwork that’s more supportive, clear & encouraging
space redesigns in hallways and courtrooms, to make it more human, breathable, polite, and dignified
technology tools that offer coaching and check-ins
data connections to improve efficiencies, and more!
See the presentation slides for the eviction diversion design workshop.
Last week, our team at the Legal Design Lab presented at the Legal Services Corporation Innovations in Technology Conference on several topics: how to build better legal help websites, how to improve e-filing and forms in state court systems, and how to use texting to assess brief services for housing matters.
One of the most popular, large sessions we co-ran was on Generative AI and Access to Justice. In this panel for several hundred participants, Margaret Hagan, Quinten Steenhuis from Suffolk LIT Lab, and Hannes Westermann from the CyberJustice Lab in Montreal presented on opportunities, user research, demonstrations, and the risks and safety of AI and access to justice.
We started the session by doing a poll of the several hundred participants. We asked them a few questions:
are you optimistic, pessimistic, or in between on the future of AI and access to justice?
what are you working on in this area, and do you have projects to share?
what words come to mind when you think about AI and access to justice?
Margaret presented on opportunities for AI & A2J, the user research the Lab is doing on how people use AI for legal problem-solving, and what justice experts have said are the metrics to look at when evaluating AI. Quinten explained generative AI and demonstrated Suffolk LIT Lab’s document automation AI work. Hannes presented on several AI-A2J projects, including JusticeBot for housing law in Quebec, Canada, and LLMediator, which he built with Jaromir Savelka and Karim Benyekhlef to improve dispute resolution between people in conflict.
We then went through a hands-on group exercise to spot AI opportunities in a particular person’s legal problem-solving journey & talk through risks, guardrails, and policies to improve the safety of AI.
Here is some of what we learned at the presentation and some thoughts about moving forward on AI & Access to Justice.
Cautious Optimism about AI & Justice
Many justice professionals, especially the folks who had come to this conference and joined the AI session, are optimistic about Artificial Intelligence’s future impact on the justice system — or are waiting to see. A much smaller percentage is pessimistic about how AI will play out for access to justice.
We had 38 respondents to our poll before our presentation started, and here’s the breakdown of optimism-pessimism.
In our follow-up conversations, we heard regularly that people were excited about AI’s potential to provide services at scale and affordably to more people (aka, ‘closing the justice gap’) — but that regulation and controls need to be in place to prevent harm.
Looking at the word cloud of people’s responses to the question “What words come to mind when you think about AI & the justice system?” further demonstrates this cautious optimism (with a focus on smart risk mitigation to empower good, impactful innovation).
This cautious optimism is in contrast to a totally cold, prohibitive approach to AI. If we saw more pessimism, we might have heard more people expressing that there should be no use of AI in the justice system. But at least among the group who attended this conference, we saw little of that perspective (that AI should be avoided or shut down in legal services, for fear of its possibility to harm). Rather, people seemed open to exploring, testing, and collaborating on AI & Access to Justice projects, as long as there was a strong focus on protecting against bias, mistakes, and other risks that could lead to harm of the public.
We need to talk about risks more specifically
That said, despite the pervasive concern about risk and harm, there is not yet a clear framework for how to protect people from them.
This could be symptomatic of the broader way that legal services have been regulated in the past: instead of talking about specific risks, we speak in generalizations about ‘protecting the consumer’. We don’t have a clear typology of what mistakes can happen, what harms can occur, how important/severe these are, and how to protect against them.
Because of this lack of a clear risk framework, most discussions about how to spot risks and harms of AI are general, anecdotal, and high-level. Even if everyone agrees that we need safety rules, including technical and policy-based interventions, we don’t have a clear menu of what those can be.
This is likely to be a large, multi-stakeholder, multi-phased process — but we need more consensus on a working framework of what risks and mistakes to watch out for, how much to prioritize them, and what kinds of things can protect people from them. Hopefully, there will be more government agencies and cross-state consortiums working on these actionable policy frameworks that can encourage responsible innovation.
Demos & User Journeys get to good AI brainstorms
Talking about AI (or brainstorming about it) can be intimidating for non-technical folks in the justice system. They may feel that it’s unclear or difficult to know where to begin when thinking about how AI could help them deliver services better, how clients could benefit, or how it could play a good role in delivering justice.
Demonstration projects, like those shared by Quinten and Hannes, are beneficial to legal aid, court staff, and other legal professionals. These concrete, specific demos allow people to see exactly how AI solutions could play out — and then riff on these demos to think through variations of how AI could help with client tasks, service provider tasks, and the work of executive directors, funders, and others.
Demo projects don’t have to be live, in-the-field AI efforts. Rather, showing early-stage versions of AI or even more provocative ‘pie-in-the-sky’ AI applications can help spark more creativity in the justice professional community, get into more specific conversations about risks and harms, and help drive momentum to make good responsible innovation happen.
Aside from demos of AI projects, user journey exercises can also be a great way to spark a productive brainstorm of opportunities, risks, and policies.
In the second half of the presentation, we ran an interactive workshop. We shared a user story of someone going through a problem with their landlord, in which their Milwaukee apartment had the heat off and it wasn’t getting fixed.
We walked through a status quo user journey, in which they tried to seek legal help, experienced delays, made some connections, and eventually got connected with someone to help with a demand letter.
We asked all of the participants to work in small groups to identify where in the status quo user journey AI could be of help. They brainstormed lots of ideas for particular touchpoints and stakeholders: for the user, friends and family, community organizations, legal aid, and pro bono groups. We then asked them to spot safety risks and possible harms, and finally to propose ways to mitigate these risks.
This kind of specific user journey and case-type exercise can help people more clearly see how to apply the general things they’re learning about AI to specific legal services. It inspires more creativity and fosters more collaboration around where the priorities should be.
Need for a Common AI-A2J Agenda of Tasks
During our exercise and follow-up conversations, we saw a laundry list emerge of possible ways AI could help different stakeholders in the justice system. This flare-out of ideas is exciting but also overwhelming.
Which ideas are worth pursuing, funding, and piloting first?
We need a working agenda of AI and Access to Justice tasks. Participants discussed many different kinds of tasks that AI could help with:
drafting demand letters,
doing smarter triage,
referral of people to services that can be a good fit,
screening frequent, repeat plaintiffs’ filings for their accuracy and legitimacy,
providing language access,
sending reminders and empowerment coaching,
assisting people in filling in forms, and beyond.
It’s great that there are so many different ideas about how AI could be helpful, but to get more collaboration from computer scientists and technologists, we need a clear set of goals that prioritizes among these tasks.
Ideally, this common task list would be organized around what is feasible and impactful for service providers and community members. This task list could attract more computer scientists to help us build, fine-tune, test, and implement generative AI that can achieve these tasks.
Our group at the Legal Design Lab is hard at work compiling this possible list of high-impact, lower-risk AI and Access to Justice tasks. We will be making it into a survey, and asking as many people as possible in the justice professional community to rank which tasks would be the most impactful if AI could do them.
This prioritized task list will then be useful in getting more AI technologists and academic partners to see if and how we can build these models, what benchmarks we should use to evaluate them, and how we can start doing limited, safety-focused pilots of them in the field.
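As a rough illustration of how such a ranking survey could feed a prioritized list, here is a minimal sketch that aggregates respondents’ ranked choices with a simple Borda-style count. The short task names are paraphrased from the list above, and the scoring rule and sample responses are assumptions for illustration only, not the Lab’s chosen methodology.

```python
from collections import defaultdict
from typing import Dict, List

# Candidate AI-for-access-to-justice tasks (paraphrased from the list above)
TASKS = [
    "draft demand letters",
    "smarter triage",
    "referrals to well-fitting services",
    "screen repeat plaintiffs' filings",
    "language access",
    "reminders and empowerment coaching",
    "form-filling assistance",
]

def borda_scores(rankings: List[List[str]]) -> Dict[str, int]:
    """Aggregate each respondent's ranked list into one priority score per task.
    A task ranked first earns the most points; unranked tasks earn none."""
    scores: Dict[str, int] = defaultdict(int)
    for ranking in rankings:
        for position, task in enumerate(ranking):
            scores[task] += len(TASKS) - position
    return dict(scores)

# Example: two hypothetical survey respondents rank their top three tasks.
responses = [
    ["smarter triage", "language access", "form-filling assistance"],
    ["language access", "draft demand letters", "smarter triage"],
]
prioritized = sorted(borda_scores(responses).items(), key=lambda kv: -kv[1])
for task, score in prioritized:
    print(f"{score:3d}  {task}")
```

However the scoring is ultimately done, the point is the same: turn a long brainstormed list into an ordered agenda that technologists and funders can act on.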
Join our community!
Our group will be continuing to work on building a strong community around AI and access to justice, research projects, models, and interdisciplinary collaborations. Please stay in touch with us at this link, and sign up here to stay notified about what we’re doing.
In December 2023, our lab hosted a half-day workshop on AI for Legal Help.
Our policy lab class of law students, master students, and undergraduates presented their user research findings from their September through December research.
Our guests, including those from technology companies, universities, state bars, legal aid groups, community-based organizations, and advocacy groups/think tanks, all worked together in break-out sessions to tackle some of the big policy and legal opportunities around AI in the space.
We thank our main class partners, the Technology Initiative Grant team from the Legal Services Corporation, for guiding our class’s user research work and giving us feedback on it.
The Stanford Legal Design Lab & the Rhode Center on the Legal Profession have just released the Filing Fairness Toolkit.
The toolkit covers 4 areas, with diagnostics, maturity models, and actionable guidance for:
improving Filing Technology Infrastructure
building a healthy Filing Partner Ecosystem
establishing good Technology Governance
refining Forms & Filing Processes
This Toolkit is the product of several years of work, design sessions, collaborations with courts and vendors across the country, and stakeholder interviews. It is for court leaders, legal tech companies, legal aid groups, and government officials who are looking for practical guidance on how to make sure that people can find, complete, and file court forms.
Check out our diagnostic tool to see how your local court system measures up to national best practices in forms, efiling, and services.
We know efiling and court technology can be confusing (if not intimidating). We’ve worked hard to make these technical terms & processes more accessible to people beyond IT staff. Getting better efiling systems in place can unlock new opportunities for access to justice.
Please let us know if you have questions, ideas, and stories about making forms, efiling, and other court tech infrastructure more accessible, user-friendly, and impactful.
Our organizing committee was pleased to receive many excellent submissions for the AI & A2J Workshop at Jurix on December 18, 2023. We were able to select half of the submissions for acceptance, and we extended the half-day workshop to be a full-day workshop to accommodate the number of submissions.
We are pleased to announce our final schedule for the workshop:
Schedule for the AI & A2J Workshop
Morning Sessions
Welcome Kickoff, 09:00-09:15
Conference organizers welcome everyone, lead introductions, and review the day’s plan.
1: AI-A2J in Practice, 09:15-10:30
09:15-09:30: Juan David Gutierrez: AI technologies in the judiciary: Critical appraisal of LLMs in judicial decision making
09:30-09:45: Ransom Wydner, Sateesh Nori, Eliza Hong, Sam Flynn, and Ali Cook: AI in Access to Justice: Coalition-Building as Key to Practical and Sustainable Applications
09:45-10:00: Mariana Raquel Mendoza Benza: Insufficient transparency in the use of AI in the judiciary of Peru and Colombia: A challenge to digital transformation
10:00-10:15: Vanja Skoric, Giovanni Sileno, and Sennay Ghebreab: Leveraging public procurement for LLMs in the public sector: Enhancing access to justice responsibly
10:15-10:30: Soumya Kandukuri: Building the AI Flywheel in the American Judiciary
Break: 10:30-11:00
2: AI for A2J Advice, Issue-Spotting, and Engagement Tasks, 11:00-12:30
11:00: Opening remarks to the session
11:05-11:20: Sam Harden: Rating the Responses to Legal Questions by Generative AI Models
11:20-11:35: Margaret Hagan: Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories
11:35-11:50: Nick Goodson and Rongfei Lui: Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process
11:50-12:05: Nina Toivonen, Marika Salo-Lahti, Mikko Ranta, and Helena Haapio, Beyond Debt: The Intersection of Justice, Financial Wellbeing and AI
12:05-12:15: Amit Haim: Large Language Models and Legal Advice
12:15-12:30: General Discussions, Takeaways, and Next Steps on AI for Advice
Break: 12:30-13:30
Afternoon Sessions
3: AI for Forms, Contracts & Dispute Resolution, 13:30-15:00
13:30: Opening remarks to this session
13:35-13:50: Quinten Steenhuis, David Colarusso, and Bryce Wiley: Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications
13:50-14:05: Katie Atkinson, David Bareham, Trevor Bench-Capon, Jon Collenette, and Jack Mumford: Tackling the Backlog: Support for Completing and Validating Forms
14:05-14:20: Anne Ketola, Helena Haapio, and Robert de Rooy: Chattable Contracts: AI Driven Access to Justice
14:20-14:30: Nishat Hyder-Rahman and Marco Giacalone: The role of generative AI in increasing access to justice in family (patrimonial) law
14:30-15:00: General Discussions, Takeaways, and Next Steps on AI for Forms & Dispute Resolution
Break: 15:00-15:30
4: AI-A2J Technical Developments, 15:30-16:30
15:30: Welcome to the session
15:35-15:50: Marco Billi, Alessandro Parenti, Giuseppe Pisano, and Marco Sanchi: A hybrid approach of accessible legal reasoning through large language models
15:50-16:05: Bartosz Krupa: Polish BERT legal language model
16:05-16:20: Jakub Drápal: Understanding Criminal Courts
16:20-16:30: General Discussion on Technical Developments in AI & A2J
Closing Discussion: 16:30-17:00
What are the connections between the sessions? What next steps do participants think will be useful? What new research questions and efforts might emerge from today?
The team had set up a novel system to recruit court users to give feedback about their experience attending court in person or remotely, combining that with administrative data and observational data about how the hearings proceeded. This allows them to examine the various effects, preferences, and outcomes that are at play now that online/remote/Zoom court proceedings are available.
Explore some of the team’s findings with Indiana court users, including:
the technological capability and usage of litigants
comparison of preferences for remote vs. in-person hearings
how people participated in remote hearings
how frequently litigants experienced technical issues
what kinds of concerns and dynamics were interrelated
As new services and tech projects launch to serve the public, there’s a regular question being asked:
How do we measure if these new justice innovations do better than the status quo?
How can we compare the risk of harm to the consumers by these new services & technologies, as compared to a human lawyer — or compared to no services at all?
This entails diving into the discussion of legal services mistakes, risks, harms, errors, complaints, and problems. Past discussions of these legal service problems tend to be fairly abstract. Many regulators & industry groups focus on consumer protection at the high level: how can we protect people from low-quality, fraudulent, or problematic legal services?
This high-level discussion of legal service problems doesn’t lend itself well to specific measurements. It’s hard to assess whether a given lawyer, justice worker, app, or other service-tech tool is more or less protective of a consumer’s interest.
I’ve been thinking a lot about how we can more systematically and clearly measure the quality level (and risk of harm) of a given legal service. As I’ve been exploring and auditing AI platforms for legal problem-solving, this kind of systematic evaluation is needed to assess the quality issues on these AI platforms.
Measuring Errors vs Measuring Consequences
As I’ve been reading through work in other areas (particularly health information and medical systems), I’ve found the work of medical & information researchers to be very instructive. See one such article here.
One of the big things I have learned from medical safety analysis has been the importance of separating the Mistake/Error from the Harm/Consequence. Medical domain experts have built two sets of resources: one for classifying provider errors and mistakes, and another for classifying patient harms and their severity.
This is somewhat of a revelation: to separate the provider error from the user harm. Not all errors result in harm — and not all harms have the same severity & level of concern.
As I study AI provision of legal services, I see that AI might make an error, but this does not always result in harm. For example, the AI might tell a person the wrong timeline around eviction lawsuits. The person might screenshot this incorrect AI response and send it to their landlord – “I actually have 60 days to pay back rent before you can sue me – see what ChatGPT says!”. The landlord might cave and give that person 60 days to pay back rent. The user hasn’t experienced harm, even though there was an error. That’s why it’s worthwhile to separate these problems into the Mistake and the Harm.
Planning out a protocol to measure legal services errors & harms
Here is the mistake-harm protocol I have been developing to assess legal services (including AI platforms answering people’s questions). It is a first draft, and I invite feedback on it:
Step 1: Categorize what Legal Service Interaction you’re assessing. Does the legal service interaction fit into one of these common categories?
Provision of info and advice in response to a client’s description of their problem, including a statement of the law, a listing of options, and a plan of steps to take (common in brief services, hotlines, chats, AI)
Filling in a document or paperwork that will be given to court or other party, including selection of claims/defenses
Intake and screening about whether the service can help the person
Prep and advocacy in a meeting, hearing, mediation, or similar
Negotiation, Assessment of options, and Decision advice on key choices
(Meta) Case Management of the person’s problem, journey through the system
(Meta) Pricing, billing, and management of charges/payments for the service
Step 2: Categorize what Problem or Mistake has happened in this interaction (with the thought that we’ll have different common problems in the different service interactions above). Preliminary list of problems/mistakes:
Provider supplies incorrect (hallucinated, wrong-jurisdiction, out-of-date, etc.) info about the law, procedure, etc.
Provider supplies correct info, but in a way that the user does not understand well enough to make a wise choice
User misinterprets the provider’s response
Provider provides biased information or advice
User experiences the provider as offensive, lacking in dignity/respect, or hurtful to their identity
Provider incorrectly shares private data from user
Provider is unreasonably slow
Provider charges unreasonable amount for service
Step 3: Identify if any Harm or Consequence occurred because of the problem, acknowledging that not all of the situations above result in any harm at all – or that there are different degrees of harm. Possible harms that the user or broader community might experience if the problems above occur:
User does not raise a claim or defense that they are entitled to and that might have gotten them a better legal judgment/outcome.
User raises an inapplicable claim, cites an incorrect law, brings in inadmissible evidence – makes a substantive or procedural mistake that might delay their case, increase their costs, or lead to a bad legal judgment.
User spends $ unnecessarily on a legal service.
User’s legal process is longer and costlier than needed.
User brings claim with low likelihood of success, and goes through an unnecessary legal process.
User’s conflict with other party worsens, and the legal process becomes lengthier, more expensive, more acrimonious, and less likely to improve their (or their family’s) social/financial outcomes.
User feels legal system is inaccessible. They are less likely to use legal services, court system, or government agency services in future problems.
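To make the three steps concrete, here is a rough sketch of how each assessed interaction could be captured as structured data (for example, when coding transcripts of AI answers to people’s legal questions). The class names, category labels, and severity scale below are illustrative placeholders for this draft protocol, not a settled taxonomy; the key point it encodes is keeping mistake rates and harm rates separate.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

# Step 1: what kind of legal service interaction is being assessed
class Interaction(Enum):
    INFO_AND_ADVICE = "info_and_advice"
    DOCUMENT_PREPARATION = "document_preparation"
    INTAKE_SCREENING = "intake_screening"
    HEARING_PREP_ADVOCACY = "hearing_prep_advocacy"
    NEGOTIATION_DECISION_ADVICE = "negotiation_decision_advice"
    CASE_MANAGEMENT = "case_management"   # meta
    PRICING_BILLING = "pricing_billing"   # meta

# Step 2: what problem or mistake occurred (possibly none)
class Mistake(Enum):
    INCORRECT_INFO = "incorrect_or_hallucinated_info"
    CORRECT_BUT_UNCLEAR = "correct_info_poorly_explained"
    USER_MISINTERPRETED = "user_misinterpreted_response"
    BIASED_ADVICE = "biased_information_or_advice"
    DISRESPECTFUL = "offensive_or_undignified_treatment"
    PRIVACY_BREACH = "improper_sharing_of_private_data"
    UNREASONABLE_DELAY = "unreasonably_slow"
    UNREASONABLE_CHARGE = "unreasonable_charge"

# Step 3: what harm or consequence followed (also possibly none)
class Harm(Enum):
    MISSED_CLAIM_OR_DEFENSE = "missed_claim_or_defense"
    SUBSTANTIVE_OR_PROCEDURAL_ERROR = "substantive_or_procedural_error"
    UNNECESSARY_SPENDING = "unnecessary_spending"
    LONGER_COSTLIER_PROCESS = "longer_costlier_process"
    LOW_MERIT_CLAIM_PURSUED = "low_merit_claim_pursued"
    WORSENED_CONFLICT = "worsened_conflict"
    LOST_TRUST_IN_SYSTEM = "lost_trust_in_system"

@dataclass
class Assessment:
    """One coded legal-service interaction (e.g., one AI answer to a user's question)."""
    interaction: Interaction
    mistakes: List[Mistake] = field(default_factory=list)
    harms: List[Harm] = field(default_factory=list)
    harm_severity: int = 0  # 0 = no harm, 1 = minor, 2 = moderate, 3 = severe (assumed scale)

def summarize(assessments: List[Assessment]) -> dict:
    """Report mistake rate and harm rate separately, since not every mistake causes harm."""
    n = len(assessments)
    with_mistake = sum(1 for a in assessments if a.mistakes)
    with_harm = sum(1 for a in assessments if a.harms)
    return {
        "interactions": n,
        "mistake_rate": with_mistake / n if n else 0.0,
        "harm_rate": with_harm / n if n else 0.0,
    }

# Example: the eviction-timeline scenario above -- an error occurred, but no harm followed.
example = Assessment(
    interaction=Interaction.INFO_AND_ADVICE,
    mistakes=[Mistake.INCORRECT_INFO],
    harms=[],  # the landlord accepted the (incorrect) 60-day claim, so the user was not harmed
)
print(summarize([example]))
```

Coding interactions this way would let reviewers report, for instance, that a platform made mistakes in a given share of answers but that only a smaller share of those mistakes led to any identifiable harm, and at what severity.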
On October 20th, Legal Design Lab executive director Margaret Hagan presented on “AI and Legal Help” to the Indiana Coalition for Court Access.
This presentation was part of a larger discussion about research projects, a learning community of judges, and evidence-based court policy and rules changes. What can courts, legal aid groups, and statewide justice agencies be doing to best serve people with legal problems in their communities?
Margaret’s presentation covered the initial user research that the lab has been conducting about how different members of the public think about AI platforms for legal problem-solving and how they use these platforms to deal with problems like evictions. The presentation also spotlighted concerning trends, mistakes, and harms around public use of AI for legal problem-solving, which justice institutions and technology companies should focus on in order to prevent consumer harm while harnessing the opportunity of AI to help people understand the law and take action to resolve their legal problems.
The discussion after the presentation covered topics like:
Is there a way for justice actors to build a more authoritative legal info AI model, especially with key legal information about local laws and rights, court procedures and timelines, court forms, and service organizations’ contact details? This might help the AI platforms avoid mistaken information or hallucinated details.
How could researchers measure the benefits and harms of AI-provided legal answers, compared to answers provided by legal experts, and compared to no services at all? Aside from anecdotes and small samples, is there a more deliberate way to analyze the performance of AI platforms when it comes to answering people’s questions about the law, procedures, forms, and services? This might include systematically measuring how often these platforms make mistakes, categorizing exactly what the mistakes are, and estimating or measuring how much harm emerges from these mistakes. A similar deliberate protocol might be done for the benefits that these platforms provide.
Are you a legal aid lawyer, court staff member, judge, academic, tech developer, computer science researcher, or community advocate interested in how AI might increase Access to Justice — and also what limits and accountability we must establish so that it is equitable, responsible, and human-centered?
Sign up at this interest form to stay in the loop with our work at Stanford Legal Design Lab on AI & Access to Justice.
We will be sending those on this list updates on:
Events that we will be running online and in person
Publications, research articles, and toolkits
Opportunities for partnerships, funding, and more
Requests for data-sharing, pilot initiatives, and other efforts
Please be in touch through the form — we look forward to connecting with you!