Categories
Class Blog Design Research

3 Kinds of Access to Justice Conflicts

(And the Different Ways to Design for Them)

by Margaret Hagan

In the access to justice world, we often talk about “the justice gap” as if it’s one massive, monolithic challenge. But if we want to truly serve the public, we need to be more precise. People encounter different kinds of legal problems, with different stakes, emotional dynamics, and system barriers. And those differences matter.

At the Legal Design Lab, we find it helpful to divide the access to justice landscape into three distinct types of problems. Each has its own logic — and each requires different approaches to research, design, technology, and intervention.

3 Types of Conflicts that we talk about when we talk about Access to Justice

1. David vs. Goliath Conflicts

This is the classic imbalance. An individual — low on time, legal knowledge, money, or support — faces off against a repeat player: a bank, a corporate landlord, a debt collector, or a government agency.

These Goliaths have teams of lawyers, streamlined filing systems, institutional knowledge, predictive data, and now increasingly, AI-powered legal automation and strategies. They can file thousands of cases a month — many of which go uncontested because people don’t understand the process, can’t afford help, or assume there’s no point trying.

This is the world of:

  • Eviction lawsuits from corporate landlords
  • Mass debt collection actions
  • Robo-filed claims, often incorrect but rarely challenged

The problem isn’t just unfairness — it’s non-participation. Most “Davids” default. They don’t get their day in court. And as AI makes robo-filing even faster and cheaper, we can expect the imbalance in knowledge, strategy, and participation to grow worse.

What David vs. Goliath Conflicts Need

Designing for this space means understanding the imbalance and structuring tools to restore procedural fairness. That might mean:

  • Tools that help people respond before defaulting. These could be pre-filing defense tools that detect illegal filings or notice issues, or tools that prepare people to negotiate from a stronger position.
  • Systems that detect and challenge low-quality filings. It could also involve systems that flag repeat abusive behavior from institutional actors.
  • Interfaces that simplify legal documents into plain language. Simplified, visual tools can help people understand their rights and the process quickly.
  • Research into procedural justice and scalable human-AI support models
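To make the "detect low-quality filings" idea concrete, here is a minimal sketch in Python. The required-field list is hypothetical and invented for illustration; a real checker would have to encode the actual pleading requirements of a specific jurisdiction.

```python
# A toy filing-quality check: flag debt-collection complaints that are
# missing fields many jurisdictions require. The field names below are
# hypothetical examples, not a statement of any real court's rules.

REQUIRED_FIELDS = [
    "original_creditor",
    "amount_claimed",
    "date_of_default",
    "chain_of_assignment",
]

def flag_filing(filing: dict) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not filing.get(f)]

# A robo-filed complaint with only partial information
complaint = {"original_creditor": "Acme Bank", "amount_claimed": 1250.00}
problems = flag_filing(complaint)
```

A tool like this could run over bulk court filings to surface claims worth challenging, or help a self-represented "David" see weaknesses in the case against them.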

2. Person vs. Person Conflicts

This second type of case is different. Here, both parties are individuals, and neither has a lawyer.

In this world, both sides are unrepresented and lack institutional or procedural knowledge. There’s real conflict — often with emotional, financial, or relational stakes — but neither party knows how to navigate the system.

Think about emotionally charged, high-stakes cases of everyday life:

  • Family law disputes (custody, divorce, child support)
  • Mom-and-pop landlord-tenant disagreements
  • Small business vs. customer conflicts
  • Neighbor disputes and small claims lawsuits

Both people are often confused. They don’t know which forms to use, how to prepare for court, how to present evidence, or what will persuade a judge. They’re frustrated, emotional, and worried about losing something precious — time with their child, their home, their reputation.

Often, these conflicts escalate unnecessarily — not because the people are bad, but because the system offers them no support in finding resolution. And with the rise of generative AI, we must be cautious: if each person gets an AI assistant that just encourages them to “win” and “fight harder,” we could see a wave of escalation, polarization, and breakdowns in courtrooms and relationships.

We have to design for a future legal system that might, with AI usage increasing, become more adversarial, less just, and harder to resolve.

What Person vs. Person Conflicts Need

In person vs. person conflicts, the goal should be to get to mutual resolutions that avoid protracted ‘high’ conflict. The designs needed are about understanding and navigation, but also about de-escalation, emotional intelligence, and procedural scaffolding.

  • Tools that promote resolution and de-escalation, not just empowerment. They can ideally support shared understanding and finding a solution that can work for both parties.
  • Shared interfaces that help both parties prepare for court fairly. Technology can help parties prepare for court, but also explore off-ramps like mediation.
  • Mediation-oriented AI prompts and conflict-resolution scaffolding. New tools could have narrative builders that let people explain their story or make requests without hostility. AI prompts and assistants could calibrate to reduce conflict, not intensify it.
  • Design research that prioritizes relational harm and trauma awareness.
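As one illustration of the "prompts calibrated to reduce conflict" idea, here is a hedged sketch of a de-escalation-oriented system prompt. The wording is purely illustrative and untested; a real prompt would need to be developed and evaluated with mediators, lawyers, and affected communities.

```python
# Sketch of a de-escalation-oriented system prompt for an AI assistant in a
# person-vs-person dispute. The prompt text is an illustrative assumption,
# not a vetted or recommended production prompt.

DEESCALATION_SYSTEM_PROMPT = """\
You are helping someone involved in a dispute with another person.
- Help them state their goals and concerns clearly and without hostility.
- Surface options for mutual resolution (mediation, settlement, agreement)
  before adversarial steps.
- Do not encourage them to 'win' at the other party's expense; flag language
  that is likely to escalate the conflict.
"""

def build_messages(user_text: str) -> list[dict]:
    """Assemble a provider-agnostic, chat-completion-style message list."""
    return [
        {"role": "system", "content": DEESCALATION_SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages(
    "My neighbor's fence is on my property. I want to make them pay."
)
```

The design choice here is that the calibration lives in the system layer, so every exchange is steered toward resolution rather than leaving escalation-avoidance up to the user.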

This is not just a legal problem. It’s a human problem — about communication, trust, and fairness. Interventions here also need to think about parties that are not directly involved in the conflict (like the children in a family law dispute between separating spouses).

3. Person vs. Bureaucracy

Finally, we have a third kind of justice issue — one that’s not so adversarial. Here, a person is simply trying to navigate a complex system to claim a right or access a service.

These kinds of conflicts might be:

  • Applying for public benefits, or appealing a denial
  • Dealing with a traffic ticket
  • Restoring a suspended driver’s license
  • Paying off fines or clearing a record
  • Filing taxes or appealing a tax decision
  • Correcting an error on a government file
  • Getting work authorization or housing assistance

There’s no opposing party. Just forms, deadlines, portals, and rules that seem designed to trip you up. People fall through the cracks because they don’t know what to do, can’t track all the requirements, or don’t have the documents ready. It’s not a courtroom battle. It’s a maze.

Here, many of the people caught in these systems do have rights and options. They just don’t know it, or they can’t get through all the procedural hoops to claim them. It’s a quiet form of injustice — made worse by fragmented service systems and hard-to-reach agencies.

What Person vs. Bureaucracy Conflicts Need

For people vs. bureaucracy conflicts, the key word is navigation. People need supportive, clarifying tools that coach and guide them through the process — and that might also make the process simpler to begin with.

  • Seamless navigation tools that walk people through every step. These could be digital co-pilots that walk people through complex government workflows, and keep them knowledgeable and encouraged at each step.
  • Clear eligibility screeners and document checklists. These could be intake simplification tools that flag whether the person is in the right place, and set expectations about what forms someone needs and when.
  • Text-based reminders and deadline alerts, to keep people on top of complicated and lengthy processes. These procedural coaches can keep people from ending up in endless continuances or falling off the process altogether. Personal timelines and checklists can track each step and provide nudges.
  • Privacy-respecting data sharing so users don’t have to “start over” every time. This could mean administrative systems with document collection and data verification features that gather and store the proofs (income, ID, residence) that people need to supply over and over. It could also mean carrying their choices and details across trusted systems, so they don’t need to fill in another form.
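The eligibility screener and checklist ideas above can be sketched in a few lines of Python. The income limits and required documents below are invented for illustration; a real screener must encode the actual eligibility rules and documentation requirements of the program in question.

```python
from dataclasses import dataclass

# Hypothetical eligibility rules, for illustration only. Real screeners must
# encode the actual statutes, guidelines, and local rules they implement.

@dataclass
class Applicant:
    monthly_income: float
    household_size: int
    has_photo_id: bool
    has_proof_of_residence: bool

# Illustrative (made-up) monthly income limits by household size
INCOME_LIMITS = {1: 2430, 2: 3287, 3: 4143, 4: 5000}

def screen(applicant: Applicant) -> dict:
    """Return a rough eligibility flag plus a checklist of missing documents."""
    limit = INCOME_LIMITS.get(applicant.household_size, 5000)
    missing = []
    if not applicant.has_photo_id:
        missing.append("Photo ID")
    if not applicant.has_proof_of_residence:
        missing.append("Proof of residence (lease or utility bill)")
    return {
        "likely_eligible": applicant.monthly_income <= limit,
        "missing_documents": missing,
    }

result = screen(Applicant(2100, 2, True, False))
```

Even a simple screener like this sets expectations early: the person learns whether they are in the right place and exactly which documents to gather before a deadline arrives.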

This space is ripe for good technology. But it also needs regulatory design and institutional tech improvements, so that systems become easier to plug into — and easier to fix. Aside from user-facing designs, we also need to work on standardizing forms, moving from form-dependencies to structured data, and improving the tech operations of these systems.

Why These Distinctions Matter

These three types of justice problems are different in form, in emotional tone, and in what people need to succeed. That means we need to study them differently, run stakeholder sessions differently, evaluate them with slightly different metrics, and employ different design patterns and principles.

Each of these problem types requires a different kind of solution and ideal outcome.

  • In David vs. Goliath, we need defense, protection, and fairness. We need to help reduce the massive imbalance in knowledge, capacity, and relationships, and ensure everyone can have their fair day in court.
  • In Person vs. Person, we need resolution, dignity, and de-escalation. We need to help people focus on mutually agreeable, sustainable resolutions to their problems with each other.
  • In Person vs. Bureaucracy, we need clarity, speed, and guided action. We must aim for seamless, navigable, efficient systems.

Each type of problem requires different work by researchers, designers, and policymakers. These include different kinds of:

  • User research methods, and ways to bring stakeholders together for collaborative design sessions
  • Product and service designs, and the patterns of tools, interfaces, and messages that will engage and serve users in this conflict.
  • Evaluation criteria, about what success looks like
  • AI safety guidelines, about how to prevent bias, capture, inaccuracies, and other possible harms. We can expect these three different conflicts to change as AI usage grows among litigants, lawyers, and court systems.

If we blur these lines, we risk building one-size-fits-none tools.

How might the coming wave of AI in the legal system affect these 3 different kinds of Access to Justice problems?

Toward Smarter Justice Innovation

At the Legal Design Lab, we believe this three-type framework can help researchers, funders, courts, and technologists build smarter interventions — and avoid repeating old mistakes.

We can still learn across boundaries. For example:

  • How conflict resolution tools from family law might help in small business disputes
  • How navigational tools in benefits access could simplify court prep
  • How due process protections in eviction can inform other administrative hearings

But we also need to be honest: not every justice problem is built the same. And not every innovation should look the same.

By naming and studying these three zones of access to justice problems, we can better target our interventions, avoid unintended harm, and build systems that actually serve the people who need them most.

Categories
AI + Access to Justice Current Projects

Justice AI Co-Pilots

The Stanford Legal Design Lab is proud to announce a new initiative funded by the Gates Foundation that aims to bring the power of artificial intelligence (AI) into the hands of legal aid professionals. With this new project, we’re building and testing AI systems—what we’re calling “AI co-pilots”—to support legal aid attorneys and staff in two of the most urgent areas of civil justice: eviction defense and reentry debt mitigation.

This work continues our Lab’s mission to design and deploy innovative, human-centered solutions that expand access to justice, especially for those who face systemic barriers to legal support.

A Justice Gap That Demands Innovation

Across the United States, millions of people face high-stakes legal problems without any legal representation. Eviction cases and post-incarceration debt are two such areas, where legal complexity meets chronic underrepresentation—leading to outcomes that can reinforce poverty, destabilize families, and erode trust in the justice system.

Legal aid organizations are often the only line of defense for people navigating these challenges. But these nonprofit front-line responders are severely under-resourced, stretched thin on staffing, tech, and funding.

The Project: Building AI Co-Pilots for Legal Aid Workflows

In collaboration with two outstanding legal aid partners—Legal Aid Foundation of Los Angeles (LAFLA) and Legal Aid Services of Oklahoma (LASO)—we are designing and piloting four AI co-pilot prototypes: two for eviction defense, and two for reentry debt mitigation.

These AI tools will be developed to assist legal aid professionals with tasks such as:

  • Screening and intake
  • Issue spotting and triage
  • Drafting legal documents
  • Preparing litigation strategies
  • Interpreting complex legal rules

Rather than replacing human judgment, these tools are meant to augment legal professionals’ work. The aim is to free up time for higher-value legal advocacy, enable legal teams to take on more clients, and help non-expert legal professionals assist in more specialized areas.

The goal is to use a deliberate, human-centered process to first identify low-risk, high-impact tasks for AI to do in legal teams’ workflows, and then to develop, test, pilot, and evaluate new AI solutions that can offer safe, meaningful improvements to legal service delivery & people’s social outcomes.

Why Eviction and Reentry Debt?

These two areas were chosen because of their widespread and devastating impacts on people’s housing, financial stability, and long-term well-being.

Eviction Defense

Over 3 million eviction lawsuits are filed each year in the U.S., with the vast majority of tenants going unrepresented. Without legal advocacy, many tenants are unaware of their rights or defenses. It’s also hard to fill in the many complicated legal documents required to participate in the system, protect one’s rights, and avoid a default judgment. This makes it difficult to negotiate with landlords, comply with court requirements, and protect one’s housing and money.

Evictions often happen in a matter of weeks, and with a confusing mix of local and state laws, it can be hard for even experienced attorneys to respond quickly. The AI co-pilots developed through this project will help legal aid staff navigate these rules and prepare more efficiently—so they can support more tenants, faster.

Reentry Debt

When people return home after incarceration, they often face legal financial obligations that can include court fines, restitution, supervision fees, and other penalties. This kind of debt can make it hard for a person to achieve stability in housing, employment, driver’s licenses, and family life.

According to the Brennan Center for Justice, over 10 million Americans owe more than $50 billion in reentry-related legal debt. Yet there are few tools to help people navigate, reduce, or resolve these obligations. By working with LASO, we aim to prototype tools that can help legal professionals advise clients on debt relief options, identify eligibility for fee waivers, and support court filings.

What Will the AI Co-Pilots Actually Do?

Each AI co-pilot will be designed for real use in legal aid organizations. They’ll be integrated into existing workflows and tailored to the needs of specific roles—like intake specialists, paralegals, or staff attorneys. Examples of potential functionality include:

  • Summarizing client narratives and flagging relevant legal issues
  • Filling in common forms and templates based on structured data
  • Recommending next steps based on jurisdictional rules and case data
  • Generating interview questions for follow-up conversations
  • Cross-referencing legal codes with case facts

The design process will be collaborative and iterative, involving continuous feedback from attorneys, advocates, and technologists. We will pilot and evaluate each tool rigorously to ensure its effectiveness, usability, and alignment with legal ethics.

Spreading the Impact

While the immediate goal is to support LAFLA and LASO, we are designing the project with national impact in mind. Our team plans to publish:

  • Open-source protocols and sample workflows
  • Evaluation reports and case studies
  • Responsible use guidelines for AI in legal aid
  • Collaboration pathways with legal tech vendors

This way, other legal aid organizations can replicate and adapt the tools to their own contexts—amplifying the reach of the project across the U.S.

“There’s a lot of curiosity in the legal aid field about AI—but very few live examples to learn from,” Hagan said. “We hope this project can be one of those examples, and help the field move toward thoughtful, responsible adoption.”

Responsible AI in Legal Services

At the Legal Design Lab, we know that AI is not a silver bullet. Tools must be designed thoughtfully, with attention to risks, biases, data privacy, and unintended consequences.

This project is part of our broader commitment to responsible AI development. That means:

  • Using human-centered design
  • Maintaining transparency in how tools work and make suggestions
  • Prioritizing data privacy and user control
  • Ensuring that tools do not replace human judgment in critical decisions

Our team will work closely with our legal aid partners, domain experts, and the communities served to ensure that these tools are safe, equitable, and truly helpful.

Looking Ahead

Over the next two years, we’ll be building, testing, and refining our AI co-pilots—and sharing what we learn along the way. We’ll also be connecting with national networks of eviction defense and reentry lawyers to explore broader deployment and partnerships.

If you’re interested in learning more, getting involved, or following along with project updates, sign up for our newsletter or follow the Lab on social media.

We’re grateful to the Gates Foundation for their support, and to our partners at LAFLA and LASO for their leadership, creativity, and deep dedication to the clients they serve.

Together, we hope to demonstrate how AI can be used responsibly to strengthen—not replace—the critical human work of legal aid.


AI + Access to Justice Summit 2024

On October 17 and 18, 2024 Stanford Legal Design Lab hosted the first-ever AI and Access to Justice Summit.

The Summit’s primary goal was to build strong relationships and a national, coordinated roadmap of how AI can responsibly be deployed and held accountable to close the justice gap.

AI + A2J Summit at Stanford Law School

Who was at the Summit?

Two law firm sponsors, K&L Gates and DLA Piper, supported the Summit through travel scholarships, program costs, and strategic guidance.

The main group of invitees were frontline legal help providers at legal aid groups, law help website teams, and the courts. We know they are key players in deciding what kinds of AI should and could be impactful for closing the justice gap. They’ll also be key partners in developing, piloting, and evaluating new AI solutions.

Key supporters and regional leaders from bar foundations, philanthropies, and pro bono groups were also invited. Their knowledge about funding, scaling, past initiatives, and spreading projects from one organization and region to others was key to the Summit.

Technology developers also came, both from big technology companies like Google and Microsoft and legal technology companies like Josef, Thomson Reuters, Briefpoint, and Paladin. Some of these groups already have AI tools for legal services, but not all of them have focused in on access to justice use cases.

In addition, we invited researchers who are developing responsible, privacy-forward, efficient strategies for building specialized AI solutions that could help people in the justice sphere, and who can draw lessons from how AI is being deployed in parallel fields like medicine or mental health.

Finally, we had participants who work in regulation and policy-making at state bars, to talk about policy, ethics, and balancing innovation with consumer protection. The ‘rules of the road’ about what kinds of AI can be built and deployed, and what standards they need to follow, are essential for clarity and predictability among developers.

What Happened at the Summit?

The Summit was a 2-day event, split intentionally into 5 sections:

  • Hands-On AI Training: Examples and Research to upskill legal professionals. There were demos, explainers, and strategies about what AI solutions are already in use or possible for legal services. Big tech, legal tech, and computer science researchers gave participants a hands-on, practical, detailed tour of AI tools, examples, and protocols that can be useful in developing new solutions to close the justice gap.
  • Big Vision: Margaret Hagan and Richard Susskind opened the second day with a challenge: where does the access to justice community want to be in 2030 when it comes to AI and the justice gap? How can individual organizations collaborate, build common infrastructure, and learn from each other to reach our big-picture goals?
  • AI+A2J as of 2024: In the morning of the second day, two panels presented on what is already happening in AI and Access to Justice — including an inventory of current pilots, demos of some early legal aid chatbots, regulators’ guidelines, and innovation sandboxes. This helped the whole group understand the early-stage developments and policies.
  • Design & Development of New Initiatives. In the afternoon of the second day, we led breakout design workshops on specific use cases: housing law, immigration law, legal aid intake, and document preparation. The diverse stakeholders worked together using our AI Legal Design workbook to scope out a proposal for a new solution — whether that might mean building new technology or adapting off-the-shelf tech to their needs.
  • Support & Collaboration. In the final session, we heard from a panel who could talk through support: financial support, pro bono partnership support, technology company licensing and architecture support, and other ways to build more new interdisciplinary relationships that could unlock the talent, strategy, momentum, and finances necessary to make AI innovation happen. We also discussed support around evaluation so that there could be more data and more feeling of safety in deploying these new tools.

Takeaways from the Summit

The Summit built strong relationships & common understanding among technologists, providers, researchers, and supporters. Our hope is to run the Summit annually, to track year-to-year progress in tackling the justice gap with AI, and to watch these relationships, collaborations, and impacts develop and scale.

In addition, some key points emerged from the training, panels, workshops, and down-time discussions.

Common Infrastructure for AI Development

Though many AI pilots are going to have to be local to a specific organization in a specific region, the national (or international) justice community can be working on common resources that can serve as infrastructure to support AI for justice.

  • Common AI Trainings: Regional leaders, who are newly being hired by state bars and bar foundations to train and explore how AI can fit with legal services, should be working together to develop common training, common resources, and common best practices.
  • Project Repository: National organizations and networks should be thinking about a common repository of projects. This inventory could track what tech provider is being used, what benchmark is being used for evaluation, what AI model is being deployed, what data it was fine-tuned on, and if and how others could replicate it.
  • Rules of the Road Trainings. National organizations and local regulators could give more guidance to leadership like legal aid executive directors about what is allowed or not allowed, what is risky or safe, or other clarification that can help more leadership be brave and knowledgeable about how to deploy AI responsibly. When is an AI project sufficiently tested to be released to the public? How should the team be maintaining and tracking an AI project, to ensure it’s mitigating risk sufficiently?
  • Public Education. Technology companies, regulators, and frontline providers need to be talking more about how to make sure that the AI that is already out there (like ChatGPT, Gemini, and Claude) is reliable, has enough guardrails, and is consumer-safe. More research needs to be done on how to encourage strategic caution among the public, so they can use the AI safely and avoid user mistakes with it (like overreliance or misunderstanding).
  • Regulators<->Frontline Providers. More frontline legal help providers need to be in conversation with regulators (like bar associations, attorneys general, or other state/federal agencies) to talk about their perspective on if and how AI can be useful in closing the justice gap. Their perspective on risks, consumer harms, opportunities, and needs from regulators can ensure that rules are being set to maximize positive impact and minimize consumer harm & technology chilling.
  • Bar Foundation Collaboration. Statewide funders (especially bar foundations) can be talking to each other about their funding, scaling, and AI strategies. Well-resourced bar foundations can share how they are distributing money, what kinds of projects they’re incentivizing, how they are holding the projects accountable, and what local resources or protocols they could share with others.

AI for Justice Should be Going Upstream & Going Big

Richard Susskind charged the group with thinking big about AI for justice. His charges & insights inspired many of the participants throughout the Summit, particularly on two points.

Going Big. Susskind called on legal leaders and technologists not to do piecemeal AI innovation (which might well be the default pathway). Rather, he called on them to work in coordination across the country (if not the globe). The focus should be on reimagining how to use AI as a way to make a fundamental, beneficial shift in justice services. This means not just doing small optimizations or tweaks, but shifting the system to work better for users and providers.

Susskind charged us with thinking beyond augmentation to models of serving the public with their justice needs.

Going Upstream. He also charged us with going upstream, figuring out earlier ways to spot problems and get help to people. This means not just adding AI into the current legal aid or court workflow — but developing new service offerings, data links, or community partnerships. Can we prevent more legal problems by using AI before a small problem spirals into a court case or large conflict?

After Susskind’s remarks, I focused on coordination among legal actors across the country for AI development. Compared to the last 20 years of legal technology development, are there ways to be more coordinated, and also more focused on impact and accountability?

There might be strategic leaders in different regions of the US and in different issue areas (housing, immigration, debt, family, etc.) who are spreading:

  • best practices,
  • evaluation protocols and benchmarks,
  • licensing arrangements with technology companies,
  • bridges with the technology companies, and
  • conversations with the regulators.

How can the Access to Justice community be more organized so that their voice can be heard as

  • the rules of the road are being defined?
  • technology companies are building and releasing models that the public is going to be using?
  • technology vendors decide if and how they are going to enter this market, and what their pricing and licensing are going to look like?

Ideally, legal aid groups, courts, and bars will be collaborating together to build AI models, agents, and evaluations that can get a significant number of people the legal help they need to resolve their problems — and to ensure that the general, popular AI tools are doing a good job at helping people with their legal problems.

Privacy Engineering & Confidentiality Concerns

One of the main barriers to AI R&D for justice is confidentiality. Legal aid and other help providers have a duty to keep their clients’ data confidential, which restricts their ability to use past data to train models or to use current data to execute tasks through AI. In practice, many legal leaders are nervous about any new technology that requires client data — will it lead to data leaks, client harms, regulatory actions, bad press, or other concerning outcomes?

Our technology developers and researchers had cutting-edge proposals for privacy-forward AI development that could address some of these concerns around confidentiality. Though these privacy engineering strategies are foreign to many lawyers, the technologists broke them down into step-by-step explanations with examples, to help more legal professionals think about data protection in a systematic, engineering way.

Synthetic Data. One of the privacy-forward strategies discussed was synthetic data. With this solution, a developer doesn’t use real, confidential data to train a system. Rather, they create a parallel but fictional set of data — like a doppelganger to the original client data. It’s structurally similar to confidential client data, but it contains no real people’s information. Synthetic data is a common strategy in healthcare technology, where there is a similar emphasis on patient confidentiality.

Neel Guha explained to the participants how synthetic data works, and how they might build a synthetic dataset that is free of identifiable data and does not violate ethical duties of confidentiality. He emphasized that the more legal aid and court groups can develop datasets that are shareable with researchers and the public, the more researchers and technologists will be attracted to working on justice-tech challenges. More synthetic datasets will be both ethically safe and beneficial to collaboration, scaling, and innovation.
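To make the doppelganger idea concrete, here is a minimal sketch of generating structurally similar but entirely fictional records. The field names and value ranges are invented for illustration; production synthetic-data pipelines also need statistical fidelity to the real distribution and formal privacy auditing, which this toy omits.

```python
import random

# Sketch: generate fictional "client" records that mirror the structure of a
# real intake dataset without containing any real person's information.
# Field names and ranges below are illustrative assumptions.

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Taylor"]
ISSUES = ["eviction", "debt collection", "benefits denial"]

def synthetic_clients(n: int, seed: int = 0) -> list[dict]:
    """Return n fictional records; a fixed seed makes the set reproducible."""
    rng = random.Random(seed)
    return [
        {
            "name": rng.choice(FIRST_NAMES),  # fictional, never real clients
            "monthly_income": rng.randrange(800, 4000, 50),
            "issue": rng.choice(ISSUES),
        }
        for _ in range(n)
    ]

dataset = synthetic_clients(100)
```

Because a seeded generator produces the same records every time, a dataset like this could be published alongside code so outside researchers can reproduce experiments without ever touching confidential case files.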

Federated Model Training. Another privacy/confidentiality strategy was federated model training. The Google DeepMind team presented on this strategy, taking examples from the health system.

Multiple hospitals wanted to work on the same project: training an AI model to better spot tuberculosis and other issues on lung X-rays. Each hospital wanted to train the AI model on its existing X-ray data, but did not want to let this confidential data leave its servers and go to a centralized server. Sharing the data would break their confidentiality requirements.

So instead, the hospitals decided to go with a federated model training protocol. Here, an original, first version of the AI model was taken from the centralized server and put on each hospital’s local servers. The local copy of the AI model would train on that hospital’s X-ray data. Then each hospital would send its updated model — not the data — back to the centralized server, which accumulated all of the learnings to make a smarter model in the center. The local hospital data was never shared.

In this way, legal aid groups or courts could explore building a centralized model while keeping each of their confidential data sources on their own private, secure servers. Individual case data and confidential data stay on the local servers, while the collective model lives in a central place and gradually gets smarter. This technique can also work for training the model over time, so that it can keep improving as the information and data continue to grow.
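The federated pattern described above can be sketched with a toy model. This is a simplified illustration of federated averaging (FedAvg): each "site" runs local training and only model weights, never raw records, travel to the center. The one-parameter model (a mean estimator) stands in for a real neural network.

```python
# Toy sketch of federated averaging (FedAvg). Each site trains locally on
# its private data; only the updated weight leaves the site. The single-
# parameter "model" below is purely illustrative.

def local_update(global_weight: float, local_data: list[float],
                 lr: float = 0.1, steps: int = 100) -> float:
    """Gradient steps toward the local data mean; data never leaves this site."""
    w = global_weight
    for _ in range(steps):
        grad = sum(w - x for x in local_data) / len(local_data)
        w -= lr * grad
    return w

def federated_round(global_weight: float, sites: list[list[float]]) -> float:
    """One round: each site updates locally, then the center averages the
    updates, weighted by dataset size, as in FedAvg."""
    total = sum(len(s) for s in sites)
    updates = [local_update(global_weight, s) for s in sites]
    return sum(u * len(s) for u, s in zip(updates, sites)) / total

# Three "hospitals" (or legal aid offices) with private local datasets.
# The central server only ever sees weights, never the data itself.
sites = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
w = 0.0
for _ in range(5):
    w = federated_round(w, sites)
# w converges to 3.5, the mean over all sites' data combined
```

The key property on display: the central model ends up reflecting everyone's data, yet no site's records ever left its own servers.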

Towards the Next Year of AI for Access to Justice

The Legal Design Lab team thanks all of our participants and sponsors for a tremendous event. We learned so much and built new relationships that we look forward to deepening with more collaborations & projects.

We were excited to see frontline providers walk away with new ideas, concrete plans for borrowing from others' AI pilots, and an understanding of what might be feasible. We were also excited to see new pro bono and funding relationships develop that can unlock more resources in this space.

Stay tuned as we continue our work on AI R&D, evaluation, and community-building in the access to justice community. We look forward to working towards closing the justice gap, through technology and otherwise!

Categories
AI + Access to Justice Current Projects

AI+A2J Research x Practice Seminar

The Legal Design Lab is proud to announce a new monthly online, public seminar on AI & Access to Justice: Research x Practice.

At this seminar, we’ll be bringing together leading academic researchers with practitioners and policymakers, who are all working on how to make the justice system more people-centered, innovative, and accessible through AI. Each seminar will feature a presentation from either an academic or practitioner who is working in this area & has been gathering data on what they’re learning. The presentations could be academic studies about user needs or the performance of technology, or less academic program evaluations or case studies from the field.

We look forward to building a community where researchers and practitioners in the justice space can make connections, build new collaborations, and advance the field of access to justice.

Sign up for the AI&A2J Research x Practice seminar, every first Friday of the month on Zoom.

Categories
AI + Access to Justice Current Projects

AI & Legal Help at Codex FutureLaw

At the April 2024 Stanford Codex FutureLaw Conference, our team at the Legal Design Lab both presented research findings about users' and subject matter experts' approaches to AI for legal help, and led a half-day interdisciplinary workshop on possible future directions in this space.

Many of the audience members in both sessions were technologists interested in the legal space, who are not necessarily familiar with the problems and opportunities for legal aid groups, courts, and people with civil legal problems. Our goal was to help them understand the “access to justice” space and spot opportunities to which their development & research work could relate.

Some of the ideas that emerged in our hands-on workshop included the following possible AI + A2J innovations:

AI to Scan Scary Legal Documents

Several groups identified that AI could help a person who has received an intimidating legal document — a notice, a rap sheet, an immigration letter, a summons and complaint, a judge's order, a discovery request, etc. AI could let them take a picture of the document, synthesize the information, and present it back with a summary of what it's about, what the important action items are, and how to get started on dealing with it.

It could make this document interactive through FAQs, service referrals, or a chatbot that lets a person understand and respond to it. It could help people take action on these important but off-putting documents, rather than avoid them.
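As a rough illustration of the "spot the key facts" step, a tool might first pull out dates, case numbers, and response obligations before handing the text to a summarizer. This standard-library sketch uses toy regex patterns; a real system would combine OCR, an LLM, and human-reviewed templates.

```python
import re

def extract_key_facts(text):
    """Pull out dates, case numbers, and response cues from document text."""
    return {
        "dates": re.findall(r"\b\d{1,2}/\d{1,2}/\d{4}\b", text),
        "case_numbers": re.findall(r"\bCase No\.\s*[\w-]+", text),
        "must_respond": bool(re.search(r"\brespond\b", text, re.IGNORECASE)),
    }

# A made-up snippet of the kind of letter a person might photograph.
letter = ("Case No. 24-CV-1138. You must respond by 06/01/2024 "
          "or a default judgment may be entered against you.")
print(extract_key_facts(letter))
```

Even this crude extraction shows why the idea is powerful: the scariest facts in the document (a deadline, a case number, a duty to respond) are exactly the ones a summary screen should surface first.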

Using AI for Better Gatekeeping of Eviction Notices & Lawsuits

One group proposed that a future AI-powered system could screen possible eviction notices or lawsuit filings, to check whether the landlord or property manager has fulfilled all obligations and met legal requirements:

  • Landlords must upload notices.
  • AI tools review the notice: is it valid? have they done all they can to comply with legal and policy requirements? is there any chance to promote cooperative dispute resolution at this early stage?
  • If the AI lives at the court clerk level, it might help court staff detect errors, deficiencies, and other problems, helping them allocate limited human review.
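The workflow above could start with simple rule checks even before any AI model is involved. A minimal sketch, with hypothetical required fields and an assumed 30-day notice period (real requirements vary by jurisdiction and case type):

```python
from datetime import date

# Hypothetical screening rules; real requirements vary by jurisdiction.
REQUIRED_FIELDS = {"tenant_name", "property_address", "notice_date", "reason"}
MIN_NOTICE_DAYS = 30  # assumed notice period for this sketch

def screen_notice(notice, filing_date):
    """Return a list of problems a clerk-side tool might flag."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - notice.keys()]
    if "notice_date" in notice:
        elapsed = (filing_date - notice["notice_date"]).days
        if elapsed < MIN_NOTICE_DAYS:
            problems.append(f"only {elapsed} days of notice given")
    return problems

notice = {"tenant_name": "J. Doe", "property_address": "12 Elm St",
          "notice_date": date(2024, 5, 1)}
print(screen_notice(notice, date(2024, 5, 15)))
```

A deployed version would encode the jurisdiction's actual notice rules and route flagged filings to human reviewers, which is where the "allocate limited human review" benefit comes from.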

AI to empower people without lawyers to respond to a lawsuit

In addition, AI could help the respondent (tenant) prepare their side, helping them to present evidence, prep court documents, understand court hearing expectations, and draft letters or forms to send.

Future AI tools could help them understand their case, make decisions, and get work product created with little burden.

With a topic like child support modification, AI could help a person negotiate a resolution with the other party, or do a trial run to see how a possible negotiation might go. It could also adjust their tone, transforming a highly emotional negotiation request into one more likely to get a positive, cooperative reply from the other party.

AI to make Legal Help Info More Accessible

Another group proposed that AI could be integrated into legal aid, law library, and court help centers to:

  • Create and maintain better inter-organization referrals, so there are warm handoffs rather than confusing roundabouts when people seek help
  • Build clearer, better maintained, more organized websites for a jurisdiction, with the best-quality resources curated and staged for easy navigation
  • Offer multi-modal presentations, making information available in different visual formats and languages
  • Provide more information in speech-to-text format, conversational chats, and across different dialects — a need especially highlighted in immigration legal services

AI to upskill students & pro bono clinics

Several groups talked about AI for training and providing expert guidance to staff, law students, and pro bono volunteers to improve their capacity to serve members of the public.

AI tools could be used in simulations to better educate people in a new legal practice area, and to supplement their knowledge when providing services. Expert practitioners can supply knowledge to the tools, which novice practitioners can then draw on to provide higher-quality services more efficiently in pro bono or law student clinics.

AI could also be used in community centers or other places where community justice workers operate, to get higher quality legal help to people who don’t have access to lawyers or who do not want to use lawyers.

AI to improve legal aid lawyers’ capacity

Several groups proposed AI that could be used behind-the-scenes by expert legal aid or court help lawyers. They could use AI to automate, draft, or speed up the work that they’re already doing. This could include:

  • Improving intake, screening, routing, and summaries of possible incoming cases
  • Drafting first versions of briefs, forms, affidavits, requests, motions, and other legal writing
  • Documenting their entire workflow & finding where AI can fit in.

Cross-Cutting action items for AI+ A2J

Across the many conversations, some common tasks emerged that cross different stakeholders and topics.

Reliable AI Benchmarks:

We as a justice community need to establish solid benchmarks to test AI effectiveness. We can use these benchmarks to focus on relevant metrics.

In addition, we need to regularly report on and track AI performance at different A2J tasks.

This can help us create feedback loops for continuous improvement.
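As a sketch of what such a benchmark harness might look like: fixed prompts with required concepts, a scoring function, and a pluggable model under test. The prompts, keyword-based scoring, and stub model here are all illustrative assumptions — a real benchmark would use expert-reviewed answers and richer scoring.

```python
# Invented benchmark items: each prompt lists concepts a good answer covers.
BENCHMARK = [
    {"prompt": "My landlord changed the locks. Is that legal?",
     "must_mention": ["illegal", "lockout"]},
    {"prompt": "How long do I have to answer an eviction summons?",
     "must_mention": ["deadline"]},
]

def score(answer, must_mention):
    """Fraction of required concepts the answer covers (a keyword proxy)."""
    hits = sum(1 for term in must_mention if term in answer.lower())
    return hits / len(must_mention)

def run_benchmark(model_fn):
    """Average score of a model (any prompt -> answer function) on the set."""
    return sum(score(model_fn(item["prompt"]), item["must_mention"])
               for item in BENCHMARK) / len(BENCHMARK)

# A stub "model" standing in for a real system under test.
stub = lambda prompt: "Lockouts are generally illegal; check your deadline."
print(run_benchmark(stub))
```

The value of even a simple harness like this is the feedback loop: any new tool or model version can be scored the same way, so the community can track whether performance on A2J tasks is actually improving.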

Data Handling and Feedback:

The community needs reliable strategies and rules for how to do AI work that respects obligations for confidentiality and privacy.

Can there be more synthetic datasets that still represent what’s happening in legal aid and court practice, so that groups don’t need to share actual client information to train models?

Can there be better Personally Identifiable Information (PII) redaction for data sharing?

Who can offer guidance on what kinds of data practices are ethical and responsible?
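On the PII-redaction question, a minimal rule-based sketch looks like this. The patterns are toy examples; production redaction needs far more robust named-entity recognition and human review before any data is shared.

```python
import re

# Toy PII patterns; real redaction pipelines use NER models plus review.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call 555-123-4567 or email jdoe@example.org, SSN 123-45-6789."))
```

Labeled placeholders (rather than blank deletions) keep the redacted text useful for training and analysis, since a model can still learn that "a phone number appeared here."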

Low-Code AI Systems:

Most justice organizations are never going to have large tech, data, or AI teams within their legal aid group or court. They will need low-code solutions that let them deploy, fine-tune, and maintain AI systems without heavy technical requirements.

Overall, the presentation, Q&A, and workshop all pointed to enthusiasm for responsible innovation in the AI+A2J space. Tech developers, legal experts, and strategists are excited about the opportunity to improve access to justice through AI-driven solutions, and to enhance efficiency and effectiveness in legal aid. With these brainstormed ideas in hand, it is now time to move toward R&D incubation that can help us understand what is feasible and valuable in practice.

Categories
AI + Access to Justice Current Projects

User Research Workshop on AI & A2J

In December 2023, our lab hosted a half-day workshop on AI for Legal Help.

Our policy lab class of law students, master students, and undergraduates presented their user research findings from their September through December research.

Our guests, including those from technology companies, universities, state bars, legal aid groups, community-based organizations, and advocacy groups/think tanks, all worked together in break-out sessions to tackle some of the big policy and legal opportunities around AI in the space.

We thank our main class partners, the Technology Initiative Grant team from the Legal Services Corporation, for guiding the direction of our class’s user research work and providing key feedback.

Categories
AI + Access to Justice Current Projects

Schedule for AI & A2J Jurix workshop

Our organizing committee was pleased to receive many excellent submissions for the AI & A2J Workshop at Jurix on December 18, 2023. We were able to select half of the submissions for acceptance, and we extended the half-day workshop to be a full-day workshop to accommodate the number of submissions.

We are pleased to announce our final schedule for the workshop:

Schedule for the AI & A2J Workshop

Morning Sessions

Welcome Kickoff, 09:00-09:15

Conference organizers welcome everyone, lead introductions, and review the day’s plan.

1: AI-A2J in Practice, 09:15-10:30 AM 

09:15-09:30: Juan David Gutierrez: AI technologies in the judiciary: Critical appraisal of LLMs in judicial decision making

09:30-09:45: Ransom Wydner, Sateesh Nori, Eliza Hong, Sam Flynn, and Ali Cook: AI in Access to Justice: Coalition-Building as Key to Practical and Sustainable Applications

09:45-10:00: Mariana Raquel Mendoza Benza: Insufficient transparency in the use of AI in the judiciary of Peru and Colombia: A challenge to digital transformation

10:00-10:15: Vanja Skoric, Giovanni Sileno, and Sennay Ghebreab: Leveraging public procurement for LLMs in the public sector: Enhancing access to justice responsibly

10:15-10:30: Soumya Kandukuri: Building the AI Flywheel in the American Judiciary

Break: 10:30-11:00 

2: AI for A2J Advice, Issue-Spotting, and Engagement Tasks, 11:00-12:30 

11:00: Opening remarks to the session

11:05-11:20: Sam Harden: Rating the Responses to Legal Questions by Generative AI Models

11:20-11:35: Margaret Hagan: Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories

11:35-11:50: Nick Goodson and Rongfei Lui: Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process

11:50-12:05: Nina Toivonen, Marika Salo-Lahti, Mikko Ranta, and Helena Haapio: Beyond Debt: The Intersection of Justice, Financial Wellbeing and AI

12:05-12:15: Amit Haim: Large Language Models and Legal Advice

12:15-12:30: General Discussions, Takeaways, and Next Steps on AI for Advice

Break: 12:30-13:30

Afternoon Sessions

3: AI for Forms, Contracts &  Dispute Resolution, 13:30-15:00 

13:30: Opening remarks to this session

13:35-13:50: Quinten Steenhuis, David Colarusso, and Bryce Wiley: Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications

13:50-14:05: Katie Atkinson, David Bareham, Trevor Bench-Capon, Jon Collenette, and Jack Mumford: Tackling the Backlog: Support for Completing and Validating Forms

14:05-14:20: Anne Ketola, Helena Haapio, and Robert de Rooy: Chattable Contracts: AI Driven Access to Justice

14:20-14:30: Nishat Hyder-Rahman and Marco Giacalone: The role of generative AI in increasing access to justice in family (patrimonial) law

14:30-15:00: General Discussions, Takeaways, and Next Steps on AI for Forms & Dispute Resolution

Break: 15:00-15:30

4:  AI-A2J Technical Developments, 15:30-16:30

15:30: Welcome to the session

15:35-15:50: Marco Billi, Alessandro Parenti, Giuseppe Pisano, and Marco Sanchi: A hybrid approach of accessible legal reasoning through large language models

15:50-16:05: Bartosz Krupa: Polish BERT legal language model

16:05-16:20: Jakub Drápal: Understanding Criminal Courts

16:20-16:30: General Discussion on Technical Developments in AI & A2J

Closing Discussion: 16:30-17:00

What are the connections between the sessions? What next steps do participants think will be useful? What new research questions and efforts might emerge from today?

Categories
AI + Access to Justice Current Projects

Call for papers to the JURIX workshop on AI & Access to Justice

At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice.

There is an open call for submissions to the workshop; the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper or consider attending.

The workshop will be on December 18, 2023 in Maastricht, Netherlands (with possible hybrid participation available).

See more about the conference at the main JURIX 23 website.

About the AI & A2J workshop

This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.

Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.

What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?

At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:

  • Identify the key challenges and opportunities for using AI to improve access to justice.
  • Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
  • Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
  • Develop proposals for how to hold AI models accountable for the public good.

Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.

Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).

Are you generally interested in AI & Access to Justice? Sign up for our Stanford Legal Design Lab AI-A2J interest list to stay in touch.

Submit a paper to the AI & A2J Workshop

We welcome submissions of 4-12 pages (using the IOS formatting guidelines). A selection will be made on the basis of workshop-level reviewing focusing on overall quality, relevance, and diversity.

Workshop submissions may be about the topics described above, including:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Deadline extended to November 20, 2023

Submission Link: Submit your 4-12 page paper here: https://easychair.org/my/conference?conf=jurixaia2j

Notification: November 28, 2023

Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands

More about the JURIX Conference

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year’s JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands. It will take place on December 18-20, 2023.

The proceedings of the conference will be published in the Frontiers of Artificial Intelligence and Applications series of IOS Press. JURIX follows the Golden Standard and provides one of the best dissemination platforms in AI & law.


Categories
Current Projects

Paths Toward Access to Justice at Scale presentation

In October 2023, Margaret Hagan presented at the International Access to Justice Forum on “Paths toward Access to Justice at Scale”. The presentation covered preliminary results of stakeholder interviews she is conducting with justice professionals across the US about how best to scale one-off innovations and new ideas into more sustainable and impactful system changes.

The abstract

Pilots to increase access to justice are happening in local courts, legal aid groups, government agencies, and community groups around the globe. These innovative new local services, technologies, and policies aim to build people’s capability, reduce barriers to access, and improve the quality of justice people receive. They are often built with an initial short-term investment, to design the pilot and run it for a period. Most of them lack a clear plan to scale up to a more robust iteration, spread to other jurisdictions, or sustain the program past the initial investment. This presentation offers a framework of theories of change for the justice system, along with stakeholders’ feedback on how to use them for impact.

The research on Access to Justice long-term strategies

The presentation covered the results of the qualitative, in-depth interviews with 11 legal aid lawyers, court staff members, legal technologists, funders, and statewide justice advocates about their work, impact, and long-term change.

The research interviews asked these professionals about their long-term, systemic theories of change, and asked them to rate theories of change that others have mentioned. They were asked about past projects they’ve run, how those projects made an impact (or not), and what they have learned from colleagues about what makes a particular initiative more impactful, sustainable, and successful.

The goal of the research interviews was to gather the informal knowledge that various professionals have gathered over years of work in reforming the justice system and improving people’s outcomes when they experience legal problems.

This knowledge often circulates casually at meetings, dinners, and over email, but is rarely laid out explicitly or systematically. The interviews also aimed to encourage reflection among practitioners, moving their focus from day-to-day work to long-term impact.

Stay tuned for more publications about this research, as the interviews & synthesis continue.

Categories
Background

Simple at the front, Smart at the back: design for access to justice innovation

A colleague working on improving the legal system in New Zealand from a user-centered design perspective mentioned this phrase to me in a recent email: Simple at the Front, Smart at the Back. Now it’s my constant refrain.

What does it mean? That when we build tools, guides, explainers, or anything else for laypeople dealing with the legal system, these should be simple, intuitive, and clear for the person. But this interface should sit on top of a very intelligent, robust, complex system. We aren’t actually ‘simplifying’ so much as improving the user experience of the system, through tools that make people feel that navigating it is simple. Using the power of coordinated, interactive, smart technology (and planning), we can make complex systems seem simple.


Next week I am convening a working group on how we can make the internet a better place for legal help.
This is one of the three main themes of work this year at my Legal Design Lab at Stanford. The main question: how can we make it incredibly easy for a layperson who starts with a Google search to get from that search to essential information about their situation, to tools that help them make smart decisions, and then to actually follow through on those decisions?

My goal is to promote more coordination among the many organizations that currently (and could possibly) provide these online services right now.

This is not a standard group of people. Some are non-legal: search engine providers like Google, Microsoft, and Yahoo, or referral services like 211 and United Way. Others are legal help site maintainers, courts, attorney general offices, consumer law companies, startups, legal aid groups, non-profits, or legal publishers. So many different entities, all potentially with value to offer a person on the journey from “Do I Have a Legal Problem?” to “I Got My Problem Resolved.”

The future I see is one of coordination of all these various service-providers so that a layperson in search of help can journey between their services seamlessly — with all their personal information in tow, knowing how to get from one provider to the next, not getting confused or lost after one service’s offering ends, and having the different providers help her along to resolution.

To get to this seamless user journey & enjoyable online user experience, it might be tempting to call for a Mega Portal. This would be the one amazing, well-designed, user-friendly, comprehensive website that can get a person from problem-query to answers, appointments, and tools.

This is a pie-in-the-sky dream, and also not that close to what different people actually want. There will not be the one almighty legal site that everyone in the country will go to. The closest we will get on that front is Google. There is simply no strong enough legal brand or comprehensive-enough resource that can cover all the many touchpoints a person must go through in a typical legal process.

Rather than bank on this one almighty Mega Portal, I see a future where we can build a whole lot of different ways to access this coordinated legal system. It would be full of different apps, websites, in-person stations, court and self-help kiosks, text messaging systems — and whatever new technology is coming our way in the next decades. We don’t need one perfect portal, we need a rich ecosystem of new & cutting-edge tools that can get different kinds of people to the same legal help.

[Image: a better internet for legal help, a coordinated system with many doors]

And to achieve this vision, what’s the key thing — the first step? It sure doesn’t sound sexy, but I am increasingly obsessed with it: Data Standards. APIs. Getting all these different kinds of Internet-based legal service providers to make sure that their systems can talk to each other, that they accessibly present their databases of information about what legal help exists, who can access what, what rules apply to whom, what documents to submit, where to submit them, and on & on….
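To make the data-standards idea concrete, here is a hypothetical minimal record format for one legal help resource that providers could publish and validate via an API; the field names, identifier, and URL are all invented for illustration, and real standards efforts (for example, Open Referral's Human Services Data Specification in the referral space) define much richer schemas.

```python
import json

# A hypothetical minimal record for one legal help resource.
resource = {
    "id": "ca-evict-help-001",  # invented identifier
    "name": "Tenant Eviction Self-Help Guide",
    "jurisdiction": "US-CA",
    "problem_codes": ["housing.eviction"],
    "languages": ["en", "es"],
    "url": "https://example.org/eviction-guide",  # placeholder URL
    "last_reviewed": "2024-01-15",
}

REQUIRED = {"id", "name", "jurisdiction", "url"}

def validate(record):
    """A referral site or search engine could run this before listing."""
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return json.dumps(record, indent=2)

print(validate(resource))
```

The point isn't this particular schema; it's that once providers agree on required fields like jurisdiction and problem type, any tool — a search engine, a court kiosk, a chatbot — can consume every provider's catalog and hand users from one service to the next.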

We can’t afford to live in the current siloed and proprietary world, where vendors and agencies hoard data and don’t talk to other service providers, and where groups build a legal services tool but then don’t let it talk to other tools or pass a user on to the next steps of her journey. We need to set requirements that providers coordinate with each other, for the sake of the legal user. Otherwise, the layperson’s attempts to find help will be disjointed, confusing, and frustrating to the point of giving up; all these different providers must talk to each other and help the person go from an internet search to a trustworthy, supportive path of legal help.

Now’s the time to be investing in the infrastructure of the Internet for Legal Help. It will make Google search results better — surfacing jurisdiction-correct and public help resources, instead of spammy and irrelevant hits. It will allow for a new generation of legal tools to be built by young entrepreneurs and lawyers, who have ideas for better ways to present legal resources, or who just want to experiment in this space. It will foster innovation in access to justice, by giving a solid backbone of content and resources for innovators to draw from and a network of other tools to link to.

That’s how we’re going to get to “Simple in the Front and Smart in the Back” — with investment in coordinating the underlying system of how service providers present their info & share it with other providers, and by opening up the consumer law/access to justice space for more experimentation and creation.