Categories
AI + Access to Justice Current Projects

Jurix 2025 AIDA2J Workshop

The Stanford Legal Design Lab is happy to be a sponsoring co-host of the third consecutive AI and Access to Justice workshop at the JURIX conference. This year, the conference takes place in Turin, Italy, in December 2025. The theme is AI, Dispute Resolution, and Access to Justice.

See the main workshop website here.

The workshop is a collaboration among the Suffolk LIT Lab, the Stanford Legal Design Lab, the Maastricht Law and Tech Lab, Libra.law, Vrije Universiteit Brussel, Swansea University, and the University of Turin, and will be part of the larger JURIX 2025 conference hosted this year in Italy.

The workshop will focus on three topics: 

  • Data issues related to access to justice (building reusable, sharable datasets for research) 
  • AI for access to justice generally 
  • AI for dispute resolution 

The provisional schedule is as follows:

Session 1 – Interfaces and Knowledge Tools

LegalWebAgent: Empowering Access to Justice via LLM-Based Web Agents
CourtPressGER: A German Court Decision to Press Release Summarization Dataset
A Voice-First AI Service for People-Centred Justice in Niger
Designing Clarity with AI: Improving the Usability of Case Law Databases of International Courts
CaseConnect: Cross-Lingual Legal Case Retrieval with Semantic Embeddings and Structure-Aware Segmentation
Glitter: Visualizing Lexical Surprisal for Readability in Administrative Texts

Session 2 – Global AI for Legal Help, Prediction and Dispute Resolution

Understanding Rights Through AI: The Role of Legal Chatbots in Access to Justice
The Private Family Forecast: A Predictive Method for an Effective and Informed Access to Justice
Artificial Intelligence Enabled Justice Tools for Refugees in Tanzania
Artificial Intelligence and Access to Justice in Chile
AI and Judicial Transformation: Comparative Analysis of Predictive Tools in EU Labour Law
From Textual Simplification to Epistemic Justice: Rethinking Digital Dispute Resolution Through AI

Session 3 – Workflows, Frameworks and Governance of Legal AI

PLDF – A Private Legal Declarative Document Generation Framework
How the ECtHR Frames Artificial Intelligence: A Distant Reading Analysis
What Legal Help Teams and Consumers Actually Do: A Legal Help Task Taxonomy
Packaging Thematic Analysis as an AI Workflow for Legal Research
Gender Bias in LLMs: Preliminary Evidence from Shared Parenting Scenario in Czech Family Law
AI-Powered Resentencing: Bridging California’s Second-Chance Gap
AI Assistance for Court Review of Default Judgments

Interactive Workshop – Global Legal Data Availability


AI+A2J Summit 2025

The Stanford Legal Design Lab hosted the second annual AI and Access to Justice Summit on November 20-21, 2025. Over 150 legal professionals, technologists, regulators, strategists, and funders came together to tackle one big question: how can we build a strong, sustainable national and international AI and Access to Justice ecosystem?

We will be synthesizing all of the presentations, feedback, proposals and discussions into a report that lays out:

  • The current toolbox that legal help teams and users can employ to accomplish key legal tasks like Q&A, triage and referrals, conducting intake interviews, drafting documents, doing legal research, reviewing draft documents, and more.
  • The strategies, practical steps, and methods with which to design, develop, evaluate, and maintain AI so that it is valuable, safe, and affordable.
  • Exemplary case studies of what AI solutions are being built, how they are being implemented in new service and business models, and how they might be scaled or replicated.
  • An agenda of how to encourage more coordination of AI technology, evaluation, and capability-building, so that successful solutions can be available to as many legal teams and users as possible — and have the largest positive impact on people’s housing, financial, family, and general stability.

Thank you to all of our speakers, participants, and sponsors!


A Call for Statewide Legal Help AI Stewards

Shaping the Future of AI for Access to Justice

By Margaret Hagan, originally published on Legal Design & Innovation

If AI is going to advance access to justice rather than deepen the justice gap, the public-interest legal field needs more than speculation and pilots — we need statewide stewardship.

Two missions of an AI steward for a state's legal help service provider community

We need specific people and institutions in every state who wake up each morning responsible for two things:

  1. AI readiness and vision for the legal services ecosystem: getting organizations knowledgeable, specific, and proactive about where AI can responsibly improve outcomes for people with legal problems — and improve the performance of services. This can ensure the intelligent and impactful adoption of AI solutions as they are developed.
  2. AI R&D encouragement and alignment: getting vendors, builders, researchers, and benchmark makers on the same page about concrete needs; matchmaking them with real service teams; guiding, funding, evaluating, and communicating so the right tools get built and adopted.

Ideally, these local state stewards will be talking with each other regularly. In this way, there can be federated research & development of AI solutions for legal service providers and the public struggling with legal problems.

This essay outlines what AI + Access to Justice stewardship could look like in practice — who can play the role, how it works alongside court help centers and legal aid, and the concrete, near-term actions a steward can take to make AI useful, safe, and truly public-interest.

State stewards can help local legal providers — legal aid groups, court help centers, pro bono networks, and community justice workers — to set a clear vision for AI futures & help execute it.

Why stewardship — why now?

Every week, new tools promise to draft, translate, summarize, triage, and file. Meanwhile, most legal aid organizations and court help centers are still asking foundational questions: What’s safe? What’s high-value? What’s feasible with our staff and privacy rules? How do we avoid vendor lock-in? How do we keep equity and client dignity at the center?

Without stewardship, AI adoption will be fragmented, extractive, and inequitable. With stewardship, states can:

  • Focus AI where it demonstrably helps clients and staff. Prioritize tech based on community and provider stakeholders’ needs and preferences — not just what is being sold by vendors.
  • Prepare data and knowledge so tools work in local contexts, and so they can be trained safely & benchmarked responsibly with relevant data that is masked and safe.
  • Align funders, vendors, and researchers around real service needs, so that all of these stakeholder groups, with their capacity to support, build, and evaluate emerging technology, direct it at meaningful opportunities.
  • Develop shared evaluation and governance so we build trust, not backlash.

Who can play the Statewide AI Steward role?

“Steward” is a role, not a single job title. Different kinds of groups can carry it, depending on how your state is organized:

  • Access to Justice Commissions / Bar associations / Bar foundations that convene stakeholders, fund statewide initiatives, and set standards.
  • Legal Aid Executive Directors (or cross-org consortia) with authority to coordinate practice areas and operations.
  • Court innovation offices / judicial councils that lead technology, self-help, and rules-of-court implementations.
  • University labs / legal tech nonprofits that have capacity for research, evaluation, data stewardship, and product prototyping.
  • Regional collaboratives with a track record of shared infrastructure and implementation.

Any of these can steward. The common denominator: local trusted relationships, coordination power, and delivery focus. The steward must be able to convene local stakeholders, communicate with them, work with them on shared training and data efforts, and move from talk to action.

The steward’s two main missions

Mission 1: AI readiness + vision (inside the legal ecosystem)

The steward gets legal organizations — executive directors, supervising/managing attorneys, practice leads, intake supervisors, operations staff — knowledgeable and specific about where AI can responsibly improve outcomes. This means:

  • Translating AI into service-level opportunities (not vague “innovation”).
  • Running short, targeted training sessions for leaders and teams.
  • Co-designing workflow pilots with clear review and safety protocols.
  • Building a roadmap: which portfolios, which tools, what sequence, what KPIs.
  • Clarifying ethical, privacy, and consumer/client safety priorities and strategies, so teams can talk about risks and worries in specific, technically informed ways that provide sufficient protection to users and orgs, and don't fall into inaction because of ill-defined concern about risk.

The result: organizations are in charge of the change rather than passive recipients of vendor pitches or media narratives.

Mission 2: AI tech encouragement + alignment (across the supply side)

The steward gets the groups who specialize in building and evaluating technology — vendors, tech groups, university researchers, benchmarkers — pointed at the right problems with the right real-world partnerships:

  • Publishing needs briefs by portfolio (housing, reentry, debt, family, etc.).
  • Matchmaking teams and vendors; structuring pilots with data, milestones, evaluation, and governance. Helping organizations choose a best-in-class vendor and then also manage this relationship with regular evaluation.
  • Contributing to benchmarks, datasets, and red-teaming so the field learns together. Build the infrastructure that can lead to effective, ongoing evaluation of how AI systems are performing.
  • Helping fund and scale what works; communicating results frankly. Ensuring that prototypes and pilots’ outcomes are shared to inform others of what they might adopt, or what changes must happen to the AI solutions for them to be adopted or scaled.

The result: useful and robust AI solutions built with frontline reality, evaluated transparently, and ready to adopt responsibly.

What Stewards Could Do Month-to-Month

I have been brainstorming specific actions that a statewide steward could take. Many of these actions could also be done in concert with a federated network of stewards.

Here are some of the things a state steward could do to advance responsible, impactful AI for Access to Justice in their region.

Map the State’s Ecosystem of Legal Help

Too often, we think in terms of organizations — “X Legal Aid,” “Y Court Help Center” — instead of understanding who’s doing the actual legal work.

Each state needs to start by identifying the legal teams operating within its borders.

  • Who is doing eviction defense?
  • Who helps people with no-fault divorce filings?
  • Who handles reasonable accommodation letters for tenants?
  • Who runs the reentry clinic or expungement help line?
  • Who offers debt relief letter assistance?
  • Who does restraining order help?

This means mapping not just legal help orgs, but service portfolios and delivery models. What are teams doing? What are they not doing? And what are the unmet legal needs that clients consistently face?

This is a service-level analysis — an inventory of the “market” of help provided and the legal needs not yet met.

AI Training for Leaders + Broader Legal Organizations

Most legal aid and court help staff are understandably cautious about AI. Many don’t feel in control of the changes coming — they feel like they’re watching the train leave the station without them.

The steward’s job is to change that.

  • Demystify AI: Explain what these systems are and how they can support (or undermine) legal work.
  • Coach teams: Help practice leads and service teams see which parts of their work are ripe for AI support.
  • Invite ownership: Position AI not as a threat, but as a design space — a place where legal experts get to define how tools should work, and where lawyers and staff retain the power to review and direct.

To do this, stewards can run short briefings for EDs, intake leads, and practice heads on LLM basics, use cases, risks, UPL and confidentiality, and adoption playbooks. Training aims to get them conversant in the basics of the technology and help them envision where responsible opportunities might be. Let them see real-world examples of how other legal help providers are using AI behind the scenes or directly to the public.

Brainstorm + Opportunity Mapping Workshops with Legal Teams

Bring housing teams, family law facilitator teams, reentry teams, or other specific legal teams together. Have them map out their workflows and choose which of their day-to-day tasks is AI-opportune. Which of the tasks are routine, templated, and burdensome?

As stewards run these workshops, they can be on the lookout for where legal teams in their state can build, buy, or adopt an AI solution in three areas.

When running an AI opportunity brainstorm, it's worth considering these three zones: where can we add to existing full-representation legal services, where can we add to brief or pro bono services, and where can we add services that legal teams don't currently offer?

Brainstorm 1: AI Copilots for Services Legal Teams Already Offer

This is the lowest-risk, highest-benefit space. Legal teams are already helping with eviction defense, demand letters, restraining orders, criminal record clearing, etc.

Here, AI can act as a copilot for the expert — a tool that does things that the expert lawyer, paralegal, or legal secretary is already doing in a rote way:

  • Auto-generates first drafts based on intake data
  • Summarizes client histories
  • Auto-fills court forms
  • Suggests next actions or deadlines
  • Creates checklists, declarations, or case timelines

These copilots don’t replace lawyers. They reduce drudge work, improve quality, and make staff more effective.

Brainstorm 2: AI Copilots for Services That Could Be Done by Pro Bono or Volunteers

Many legal aid organizations know where they could use more help: limited-scope letters, form reviews, answering FAQs, or helping users navigate next steps.

AI can play a key role in unlocking pro bono, brief advice, and volunteer capacity:

  • Automating burdensome tasks like collecting or reviewing database records
  • Helping them write high-quality letters or motions
  • Pre-filling petitions and forms with data that has been gathered
  • Providing them with step-by-step guidance
  • Flagging errors, inconsistencies, or risks in drafts
  • Offering language suggestions or plain-language explanations

Think of this as AI-powered “training wheels” that help volunteers help more people, with less handholding from staff.

Brainstorm 3: AI Tools for Services That Aren’t Currently Offered — But Should Be

There are many legal problems where there is high demand, but legal help orgs don’t currently offer help because of capacity limits.

Common examples of these under-served areas include:

  • Security deposit refund letters
  • Creating demand letters
  • Filing objections to default judgments
  • Answering brief questions

In these cases, AI systems — carefully designed, tested, and overseen — can offer direct-to-consumer services that supplement the safety net:

  • Structured interviews that guide users through legal options
  • AI-generated letters/forms with oversight built in
  • Clear red flags for when human review is needed

This is the frontier: responsibly extending the reach of legal help to people who currently get none. The brainstorm might also include reviewing existing direct-to-consumer AI tools from other legal orgs, and deciding which they might want to host or link to from their website.

The steward can hold these brainstorming and prioritization sessions to help legal teams identify these legal team copilots, pro bono tools, and new service offerings in their issue area. The stewards and legal teams can then move the AI vision forward & prepare a clear scope for what AI should be built.

Data Readiness + Knowledge Base Building

Work with legal and court teams to inventory what data they have that could be used to train or evaluate some of the legal AI use cases they have envisioned. Support them with tools & protocols to mask PII in these documents and make them safe to use in AI R&D.

This could mean getting anonymized completed forms, documents, intake notes, legal answers, data reports, or other legal workflow items. Likely, much of this data will have to be labeled, scored, and marked up so that it’s useful in training and evaluation.
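To make the masking step concrete, here is a minimal, illustrative Python sketch using only simple regular expressions. The patterns, labels, and the `mask_pii` helper are all hypothetical examples, not an existing tool; a production redaction protocol would pair pattern matching with NER models and human review.

```python
import re

# Illustrative patterns only; real pipelines need far broader coverage
# (names, addresses, case numbers) plus model-based entity detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII spans with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Call client at 650-555-1234 or jane.doe@example.org re: SSN 123-45-6789."
print(mask_pii(note))
# → Call client at [PHONE] or [EMAIL] re: SSN [SSN].
```

Keeping the placeholder labels (rather than deleting spans outright) preserves document structure, which matters when the masked text is later labeled and scored for training or evaluation.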

The steward can help the groups that hold this data to understand what data they hold, how to prepare it and share it, and how to mark it up with helpful labels.

Part of this is also to build a Local Legal Help Knowledge Base — not just about the laws and statutes on the books, but about the practical, procedural, and service knowledge that people need when trying to deal with a legal problem.

Much of this knowledge is in legal aid lawyers’ and court staff’s heads, or training decks and events, or internal knowledge management systems and memos.

Stewards can help these local organizations contribute this knowledge about local legal rules, procedures, timelines, forms, services, and step-by-step guides into a statewide knowledge base. This knowledge base can then be used by the local providers. It will be a key piece of infrastructure on which new AI tools and services can be built.

Adoption Logistics

As local AI development visions come together, the steward can lead on adoption logistics.

The steward can make sure that the local orgs don’t reinvent what might already exist, or spend money in a wasteful way.

They can do tool evaluations to see which LLMs and specific AI solutions perform best on the scoped tasks. They can identify researchers and evaluators to help with this. They can also help organizations procure these tools or even create a pool of multiple organizations with similar needs for a shared procurement process.
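One lightweight way to make such tool evaluations comparable across organizations is a weighted rubric. The sketch below is illustrative only: the criteria, weights, and the `weighted_score` helper are assumptions for the example, not a published evaluation standard.

```python
# Hypothetical rubric: criteria and weights are illustrative assumptions.
RUBRIC = {"accuracy": 0.5, "plain_language": 0.3, "safety_flags": 0.2}

def weighted_score(ratings: dict) -> float:
    """Combine 1-5 reviewer ratings into one weighted score."""
    return round(sum(RUBRIC[c] * ratings[c] for c in RUBRIC), 2)

# Reviewer ratings for two candidate tools on the same scoped task
tool_ratings = {
    "tool_a": {"accuracy": 4, "plain_language": 5, "safety_flags": 3},
    "tool_b": {"accuracy": 5, "plain_language": 3, "safety_flags": 4},
}

# Rank tools from best to worst weighted score
ranked = sorted(tool_ratings, key=lambda t: weighted_score(tool_ratings[t]), reverse=True)
for name in ranked:
    print(name, weighted_score(tool_ratings[name]))
```

A shared rubric like this lets a pool of organizations with similar needs compare results from different evaluators, which supports the joint procurement model described above.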

They might also negotiate beneficial, affordable licenses or access to AI tools that can help with the desired functions. They can also ensure that case management and document management systems are responsive to the AI R&D needs, so that the legacy technology systems will integrate well with the new tools.

Ideally, the steward will help the statewide group and the local orgs make smart investments in the tech they might need to buy or build — and can help clear the way when hurdles emerge.

Bigger-Picture Steward Strategies

In addition to these possible actions, statewide stewards can also follow a few broader strategies to get a healthy AI R&D ecosystem in their state and beyond.

Be specific to legal teams

As I’ve already mentioned throughout this essay, stewards should be focused on the ‘team’ level, rather than the ‘organization’ one. It’s important that they develop relationships and run activities with teams that are in charge of specific workflows — and that means the specific kind of legal problem they help with.

The steward should organize its statewide network around named teams and named services, for example:

  • Housing law teams & their workflows: hotline consults, eviction defense prep, answers, motions to set aside, trial prep, RA letters for habitability issues, security-deposit demand letters.
  • Reentry teams & their workflows: record clearance screening, fines & fees relief, petitions, supporting declarations, RAP sheet interpretation, collateral consequences counseling.
  • Debt/consumer teams & their workflows: answer filing, settlement letters, debt verification, exemptions, repair counseling, FDCPA dispute letters.
  • Family law teams & their workflows: form prep (custody, DV orders), parenting plans, mediation prep, service and filing instructions, deadline tracking.

The steward can make progress on its two main goals — AI readiness and R&D encouragement — if it can build a strong local network among the teams that work on similar workflows, with similar data and documents, with similar audiences.

Put ethics, privacy, and operational safeguards at the center

Stewardship builds trust by making ethics operational rather than an afterthought. This all happens when AI conversations are grounded, informed, and specific among legal teams and communities. It also happens when they work with trained evaluators, who know how to evaluate the performance of AI rigorously, not based on anecdotes and speculation.

The steward network can help by planning out and vetting common, proven strategies to ensure quality & consumer protection are designed into the AI systems. They could work on:

  • Competence & supervision protocols: helping legal teams plan for the future of expert review of AI systems, clarifying “eyes-on” review models with staff trainings and tools. Stewards can also help them plan for escalation paths, when human reviewers find problems with the AI’s performance. Stewards might also work on standard warnings, verification prompts, and other key designs to ensure that reviewers are effectively watching AI’s performance.
  • Professional ethics rules clarity: help the teams design internal policies that ensure they’re in compliance with all ethical rules and responsibilities. Stewards can also help them plan out effective disclosures and consent protocols, so consumers know what is happening and have transparency.
  • Confidentiality & privacy: This can happen at the federated/national level. Stewards can set rules for data flows, retention, de-identification/masking — which otherwise can be overwhelming for specific orgs. Stewards can also vet vendors for security and subprocessing.
  • Accountability & Improvements: Stewards can help organizations and vendors plan for good data-gathering & feedback cycles about AI’s performance. This can include guidance on document versioning, audit logs, failure reports, and user feedback loops.

Stewards can help bake safeguards into workflows and procurement, so that there are ethics and privacy by design in the technical systems that are being piloted.

Networking stewards into a federated ecosystem

For statewide stewardship to matter beyond isolated pilots, stewards need to network into a federated ecosystem — a light but disciplined network that preserves local autonomy while aligning on shared methods, shared infrastructure, and shared learning.

The value of federation is compounding: each state adapts tools to local law and practice, contributes back what it learns, and benefits from the advances of others. Also, many of the tasks of a steward — educating about AI, building ethics and safeguards, measuring AI, setting up good procurement — will be quite similar state-to-state. Stewards can share resources and materials to implement locally.

What follows reframes “membership requirements” as the operating norms of that ecosystem and explains how they translate into concrete habits, artifacts, and results.

Quarterly check-ins become the engine of national learning. Stewards participate in a regular virtual cohort, not as a status ritual but as an R&D loop. Each session surfaces what was tried, what worked, and what failed — brief demos, before/after metrics, and annotated playbooks.

Stewards use these meetings to co-develop materials, evaluation rubrics, funding strategies, disclosure patterns, and policy stances, and to retire practices that didn’t pan out. Over time, this cadence produces a living canon of benchmarks and templates that any newcomer steward can adopt on day one.

Each year, the steward could champion at least one pilot or evaluation (for example, reasonable-accommodation letters in housing or security-deposit demand letters in consumer law), making sure it has clear success criteria, review protocols, and an exit ramp if risks outweigh benefits. This can help the pilots spread to other jurisdictions more effectively.

Shared infrastructure is how federation stays interoperable. Rather than inventing new frameworks in every state, stewards lean on common platforms for evaluation, datasets, and reusable workflows. Practically, that means contributing test cases and localized content, adopting shared rubrics and disclosure patterns, and publishing results in a comparable format.

It also means using common identifiers and metadata conventions so that guides, form logic, and service directories can be exchanged or merged without bespoke cleanup. When a state localizes a workflow or improves a safety check, it pushes the enhancement upstream, so other states can pull it down and adapt with minimal effort.
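The common-identifier idea can be made concrete with a small sketch. The `ServiceGuide` record below is a hypothetical schema — the field names, ID format, and tags are illustrative assumptions, not an existing standard — but it shows how shared metadata conventions let guides and directories be exchanged or merged without bespoke cleanup.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical shared schema for federated exchange; all field names
# are illustrative, not a published specification.
@dataclass
class ServiceGuide:
    guide_id: str                 # stable identifier shared across states
    jurisdiction: str             # e.g., an ISO-style region code
    portfolio: str                # housing, debt, family, reentry, ...
    title: str
    language: str = "en"
    tags: list = field(default_factory=list)

# A state localizes a guide but keeps the shared guide_id, so an
# improved version can be pushed upstream and pulled down elsewhere.
guide = ServiceGuide(
    guide_id="housing-security-deposit-demand",
    jurisdiction="US-CA",
    portfolio="housing",
    title="Security Deposit Demand Letter Guide",
    tags=["demand-letter", "self-help"],
)
print(json.dumps(asdict(guide), indent=2))
```

Because every state emits the same fields, a national directory can aggregate these records mechanically instead of hand-mapping each state's format.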

Annual reporting turns stories into evidence and standards. Each steward could publish a concise yearly report that covers: progress made, obstacles encountered, datasets contributed (and their licensing status), tools piloted or adopted (and those intentionally rejected), equity and safety findings, and priorities for the coming year.

Because these reports follow a common outline, they are comparable across states and can be aggregated nationally to show impact, surface risks, and redirect effort. They also serve as onboarding guides for new teams: “Here’s what to try first, here’s what to avoid, here’s who to call.”

Success in 12–18 months looks concrete and repeatable. In a healthy federation, we could point to a public, living directory of AI-powered teams and services by portfolio, with visible gaps prioritized for action.

  • We could have several legal team copilots embedded in high-volume workflows — say, demand letters, security-deposit letters, or DV packet preparation — with documented time savings, quality gains, and staff acceptance.
  • We could have volunteer unlocks, where a clinic or pro bono program helps two to three times more people in brief-service matters because a copilot provides structure, drafting support, and review checks.
  • We could have at least one direct-to-public workflow launched in a high-demand, manageable-risk area, with clear disclosures, escalation rules, and usage metrics.
  • We would see more contributions to data-driven evaluation practices and R&D protocols. This could be localized guides, triage logic, form metadata, anonymized samples, and evaluation results. Or it could be an ethics and safety playbook that is not just written but operationalized in training, procurement, and audits.

A federation of stewards doesn’t need heavy bureaucracy. It could be a set of light, disciplined habits that make local work easier and national progress faster. Quarterly cohort exchanges prevent wheel-reinventing. Local duties anchor AI in real services. Shared infrastructure keeps efforts compatible. Governance protects the public-interest character of the work. Annual reports convert experience into standards.

Put together, these practices allow stewards to move quickly and responsibly — delivering tangible improvements for clients and staff while building a body of knowledge the entire field can trust and reuse.

Stewardship as the current missing piece

Our team at Stanford Legal Design Lab is aiming for an impactful, ethical, robust ecosystem of AI in legal services. We are building the platform JusticeBench to be a home base for those working on AI R&D for access to justice. We are also building justice copilots directly with several legal aid groups.

But to build this robust ecosystem, we need local stewards for state jurisdictions across the country — who can take on key leadership roles and decisions — and make sure that there can be A2J AI that responds to local needs but benefits from national resources. Stewards can also help activate local legal teams, so that they are directing the development of AI solutions rather than reacting to others’ AI visions.

We can build legal help AI state by state, team by team, workflow by workflow. But we need stewards who keep clients, communities, and frontline staff at the center, while moving their state forward.

That’s how AI becomes a force for justice — because we designed it that way.


ICAIL workshop on AI & Access to Justice

The Legal Design Lab is excited to co-organize a new workshop at the International Conference on Artificial Intelligence and Law (ICAIL 2025):

AI for Access to Justice (AI4A2J@ICAIL 2025)
📍 Where? Northwestern University, Chicago, Illinois, USA
🗓 When? June 20, 2025 (Hybrid – in-person and virtual participation available)
📄 Submission Deadline: May 4, 2025
📬 Acceptance Notification: May 18, 2025

Submit a paper here: https://easychair.org/cfp/AI4A2JICAIL25

This workshop brings together researchers, technologists, legal aid practitioners, court leaders, policymakers, and interdisciplinary collaborators to explore the potential and pitfalls of using artificial intelligence (AI) to expand access to justice (A2J). It is part of the larger ICAIL 2025 conference, the leading international forum for AI and law research, hosted this year at Northwestern University in Chicago.


Why this workshop?

Legal systems around the world are struggling to meet people’s needs—especially in housing, immigration, debt, and family law. AI tools are increasingly being tested and deployed to address these gaps: from chatbots and form fillers to triage systems and legal document classifiers. Yet these innovations also raise serious questions around risk, bias, transparency, equity, and governance.

This workshop will serve as a venue to:

  • Share and critically assess emerging work on AI-powered legal tools
  • Discuss design, deployment, and evaluation of AI systems in real-world legal contexts
  • Learn from cross-disciplinary perspectives to better guide responsible innovation in justice systems


What are we looking for?

We welcome submissions from a wide range of contributors—academic researchers, practitioners, students, community technologists, court innovators, and more.

We’re seeking:

  • Research papers on AI and A2J
  • Case studies of AI tools used in courts, legal aid, or nonprofit contexts
  • Design proposals or system demos
  • Critical perspectives on the ethics, policy, and governance of AI for justice
  • Evaluation frameworks for AI used in legal services
  • Collaborative, interdisciplinary, or community-centered work

Topics might include (but are not limited to):

  • Legal intake and triage using large language models (LLMs)
  • AI-guided form completion and document assembly
  • Language access and plain language tools powered by AI
  • Risk scoring and case prioritization
  • Participatory design and co-creation with affected communities
  • Bias detection and mitigation in legal AI systems
  • Evaluation methods for LLMs in legal services
  • Open-source or public-interest AI tools

We welcome both completed projects and works-in-progress. Our goal is to foster a diverse conversation that supports learning, experimentation, and critical thinking across the access to justice ecosystem.


Workshop Format

The workshop will be held on June 20, 2025 in hybrid format—with both in-person sessions in Chicago, Illinois and the option for virtual participation. Presenters and attendees are welcome to join from anywhere.


Workshop Committee

  • Hannes Westermann, Maastricht University Faculty of Law
  • Jaromír Savelka, Carnegie Mellon University
  • Marc Lauritsen, Capstone Practice Systems
  • Margaret Hagan, Stanford Law School, Legal Design Lab
  • Quinten Steenhuis, Suffolk University Law School


Submit Your Work

For full submission guidelines, visit the official workshop site:
https://suffolklitlab.org/ai-for-access-to-justice-at-the-international-conference-on-ai-and-law-2025-ai4a2j-icail25/

Submit your paper at EasyChair here.

Submissions are due by May 4, 2025.
Notifications of acceptance will be sent by May 18, 2025.


We’re thrilled to help convene this conversation on the future of AI and justice—and we hope to see your ideas included. Please spread the word to others in your network who are building, researching, or questioning the role of AI in the justice system.

Categories
AI + Access to Justice Current Projects

AI, Machine Translation, and Access to Justice

Lessons from Cristina Llop’s Work on Language Access in the Legal System

Artificial intelligence (AI) and machine translation (MT) are often seen as tools with the potential to expand access to justice, especially for non-English speakers in the U.S. legal system. However, while AI-driven translation tools like Google Translate and AutoML offer impressive accuracy in general contexts, their effectiveness in legal settings remains questionable.

At the Stanford Legal Design Lab’s AI and Access to Justice research webinar on February 7, 2025, legal expert Cristina Llop shared her observations from reviewing live translations between legal providers’ staff and users. Her findings highlight both the potential and pitfalls of using AI for language access in legal settings. This article explores how AI performs in practice, where it can be useful, and why human oversight, national standards, and improved training datasets are critical.

How Machine Translation Performs in Legal Contexts

Many courts and legal service providers have turned to AI-powered Neural Machine Translation (NMT) models like Google Translate to help bridge language barriers. While AI is improving, Llop’s research suggests that accuracy in general language translation does not necessarily translate to legal language accuracy.

1. The Good: AI Can Be Useful in Certain Scenarios

Machine translation tools can provide immediate, cost-effective assistance in specific legal language tasks, such as:

  • Translating declarations and witness statements
  • Converting court forms and pleadings into different languages
  • Making legal guides and court websites more accessible
  • Supporting real-time interpretation in court help centers and clerk offices

This can be especially valuable in resource-strapped courts and legal aid groups that lack human interpreters for every case. However, Llop cautions that even when AI-generated translations sound fluent, they may not be legally precise or safe to rely on.

AI doesn’t pick up on legal context and mistranslates key information about trials, filing, court, and options.

2. The Bad: Accuracy Breaks Down in Legal Contexts

Llop identified systematic mistranslations that could have serious consequences:

Common legal terms are mistranslated due to a lack of specialized training data. For example, “warrant” is often translated as “court order,” which downplays the severity of a legal document.

Contextual misunderstandings lead to serious errors:

  • “Due date” was mistranslated as “date to give birth.”
  • “Trial” was often translated as “test.”
  • “Charged with a battery case” turned into “loaded with a case of batteries.”

Pronoun confusion creates ambiguity:

  • Spanish’s use of “su” (your/his/her/their) is often mistranslated in legal documents, leading to uncertainty about property ownership, responsibility, or court filings.
  • In restraining order cases, it was unclear who was accusing whom, which could put victims at risk.

AI can introduce gender biases:

  • Words with no inherent gender (e.g., “politician”) are often translated as male.
  • The Spanish “me maltrata” can be translated either as “she mistreats me” or “he mistreats me,” since the subject’s gender is not specified. The machine would default to “he mistreats me,” potentially distorting evidence in domestic violence cases.

Without human review, these AI-driven errors can go unnoticed, leading to severe legal consequences.

The Dangers of Mistranslation in Legal Interactions

One of the most troubling findings from Llop’s work was the invisible breakdowns in communication between legal providers and non-English speakers.

1. Parallel Conversations Instead of Communication

In many cases, both parties believed they were exchanging information, but in reality:

  • Legal providers were missing key facts from litigants.
  • Users did not realize that their information was misunderstood or misrepresented.
  • Critical details — such as the nature of an abuse claim or financial disclosures — were being lost.

This failure to communicate accurately could result in:

  • People choosing the wrong legal recourse and misunderstanding what options are available to them.
  • Legal provider staff making decisions based on incomplete or distorted information, providing services and option menus based on misunderstandings about the person’s scenario or preferences.
  • Access to justice being compromised for vulnerable litigants.

2. Why a Glossary Isn’t Enough

Some legal institutions have tried to mitigate errors by adding legal glossaries to machine translation tools. However, Llop’s research found that glossary-based corrections do not always solve the problem:

  • Example 1: The word “address” was provided to the AI to ensure translation to “mailing address” (instead of “home address”) in one context — but then mistakenly applied when a clerk asked, “What issue do you want to address?”
  • Example 2: “Will” (as in a legal document) was mistranslated when applied to the auxiliary verb “will” in regular interactions (“I will send you this form”).
  • Example 3: A glossary fix for “due date” worked.
  • Example 4: A glossary fix for “pleading” worked but failed to adjust grammatical structure or pronoun usage.
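A minimal sketch (purely illustrative, not the actual implementation of any translation tool) shows why this kind of blind glossary substitution misfires on words like “address” that carry multiple senses:

```python
# Toy glossary pass that pins "address" to the noun sense
# ("mailing address") before the text is sent to machine translation.
GLOSSARY = {"address": "mailing address"}  # hypothetical glossary entry

def apply_glossary(sentence: str) -> str:
    # Blind string substitution: no awareness of part of speech or context.
    out = sentence
    for term, pinned in GLOSSARY.items():
        out = out.replace(term, pinned)
    return out

# Works for the intended noun use:
print(apply_glossary("Please confirm your address."))
# -> "Please confirm your mailing address."

# Misfires on the verb use a clerk might say:
print(apply_glossary("What issue do you want to address?"))
# -> "What issue do you want to mailing address?"
```

Without part-of-speech or context awareness, the correction that helps in one sentence corrupts the next, which is exactly the pattern Llop observed.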

These patchwork fixes are not enough. More comprehensive training, oversight, and quality control are needed.

Advancing Legal Language AI: AutoML and Human Review

One promising improvement is AutoML, which allows legal organizations to train machine translation models with their own specialized legal data.

AutoML: A Step Forward, But Still Flawed

Llop’s team worked on an AutoML project by:

  1. Collecting 8,000+ legal translation pairs from official legal sources that had been translated by experts.
  2. Correcting AI-generated translations manually.
  3. Feeding improved translations back into the model.
  4. Iterating until translations were more accurate.
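The cycle above can be pictured with a toy sketch (everything here is illustrative; it is not the AutoML API, and the Spanish pairs are hypothetical examples based on the errors described in this article):

```python
# Toy human-in-the-loop retraining cycle: train, draft, correct, retrain.
EXPERT_TRUTH = {"fecha límite": "due date", "juicio": "trial"}

def train(pairs):
    """'Train' a toy model: memorize the pairs as a lookup table."""
    table = dict(pairs)
    return lambda src: table.get(src, "<untranslated>")

def expert_review(src, draft):
    """A human expert corrects the machine draft against ground truth."""
    return EXPERT_TRUTH.get(src, draft)

# 1. Start from an imperfect seed corpus (one pair is wrong).
pairs = [("fecha límite", "date to give birth"), ("juicio", "trial")]
model = train(pairs)

# 2-4. Correct the model's drafts and feed them back, iterating.
for _ in range(2):
    pairs = [(src, expert_review(src, model(src))) for src, _ in pairs]
    model = train(pairs)

print(model("fecha límite"))  # -> "due date"
```

The real pipeline retrains a neural model on thousands of corrected pairs rather than memorizing a lookup table, but the loop structure is the same: drafts go out, expert corrections come back in.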

Results showed that AutoML improved translation quality, but major issues remained:

  • AI struggled with conversational context. If a prior sentence referenced “my wife,” but the next message about the wife didn’t specify gender, AI might mistakenly switch the pronoun to “he”.
  • AI overfit to common legal phrases, inserting “petition” even when the correct translation should have been “form.”

These challenges highlight why human review is essential.

Real-Time Machine Translation

While text-based AI translation can be refined over time, real-time translation — such as voice-to-text systems in legal offices — presents even greater challenges.

Voice-to-Text Lacks Punctuation Awareness

People do not dictate punctuation, but pauses and commas can change legal meaning. For example:

  • “I’m guilty” vs. “I’m not guilty” (missing comma error).
  • Minor misspellings or poor grammar can dramatically change a translation.

AI Struggles with Speech Patterns

Legal system users come from diverse linguistic backgrounds, making real-time translation even more difficult. AI performs poorly when users:

  • Speak quickly or use filler words (“um,” “huh,” “oh”).
  • Have soft speech or heavy accents.
  • Use sentence structures influenced by indigenous or regional dialects.

These issues make clear that AI faces major challenges in performing accurately in high-stakes legal interactions.

The Need for National Standards and Training Datasets

Llop’s research underscores a critical gap: there are no national standards, shared training datasets, or quality benchmarks for legal translation AI.

A National Legal Translation Project

Llop saw an opportunity for improvement if there were to be:

  • A centralized effort to collect high-quality legal translation pairs.
  • State-specific localization of legal terms.
  • Guidelines for AI usage in courts, legal aid orgs, and other institutions.

Such a standardized dataset could train AI more effectively while ensuring legal accuracy.

Training for English-Only Speakers

English-speaking legal provider staff need training on how to structure their speech for better AI translation:

  • Using plain language and short sentences.
  • Avoiding vague pronouns (“his, her, their”).
  • Confirming meaning before finalizing translations.

AI, Human Oversight, and National Infrastructure in Legal Translation

Machine translation and AI can be useful, but they are far from perfect. Without human review, legal expertise, and national standards, AI-generated translations could compromise access to justice.

Llop’s work highlights the urgent need for:

  1. Human-in-the-loop AI translation.
  2. Better training data tailored for legal contexts.
  3. National standards for AI language access.

As AI continues to evolve, it must be designed with legal precision and human oversight — because in law, a mistranslation can change lives.

Get in touch with Cristina Llop to learn more about her work & vision for better language access: https://www.linkedin.com/in/cristina-llop-75749915/

Thanks to her for a terrific, detailed presentation at the AI+A2J Research series. Sign up to come to future Zoom webinars in our series. Find out more about the Stanford Legal Design Lab’s work on AI & Access to Justice here.

Categories
AI + Access to Justice Class Blog Current Projects Design Research

Interviewing Legal Experts on the Quality of AI Answers

This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions that individuals may get when they go online to find help.

Learned Hands Battle Mode

Our team has developed a new ‘Battle Mode’ of our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. Additionally, we have the experts speak aloud as they are playing, asking that they articulate their reasoning. This allows us to gain insights into why a particular response is deemed good or bad, helpful or harmful.
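To give a sense of how pairwise “battle” judgments can be aggregated into an overall ranking, here is a minimal sketch using an Elo-style update (illustrative only; the model names are hypothetical, and this is not the Learned Hands platform’s actual scoring code):

```python
# Elo-style rating update for one pairwise comparison between two
# AI answers: the winner's rating rises, the loser's falls, scaled by
# how surprising the outcome was given their current ratings.
def elo_update(r_winner, r_loser, k=32):
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    r_winner += k * (1 - expected)
    r_loser -= k * (1 - expected)
    return r_winner, r_loser

ratings = {"model_a": 1000.0, "model_b": 1000.0}  # hypothetical models

# An expert picks model_a's answer as superior in one battle:
ratings["model_a"], ratings["model_b"] = elo_update(
    ratings["model_a"], ratings["model_b"])

print(ratings)  # model_a rises above 1000; model_b falls below
```

Over many battles across many experts and questions, this kind of aggregation turns individual head-to-head judgments into a stable leaderboard, while the think-aloud transcripts supply the qualitative “why” behind each comparison.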

Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries—one applicable to major AI platforms as well as local chatbots and tools developed by individual groups and startups. Refining these audit and benchmark methods will help ensure that we can accurately assess AI’s capabilities in answering people’s legal questions.

Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment. 

We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.

Stay tuned for the results of our expert-led rating games and feedback sessions!

If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.

https://airtable.com/embed/appMxYCJsZZuScuTN/pago0ZNPguYKo46X8/form

Categories
AI + Access to Justice Current Projects

AI+A2J Research x Practice Seminar

The Legal Design Lab is proud to announce a new monthly online, public seminar on AI & Access to Justice: Research x Practice.

At this seminar, we’ll be bringing together leading academic researchers with practitioners and policymakers, who are all working on how to make the justice system more people-centered, innovative, and accessible through AI. Each seminar will feature a presentation from either an academic or practitioner who is working in this area & has been gathering data on what they’re learning. The presentations could be academic studies about user needs or the performance of technology, or less academic program evaluations or case studies from the field.

We look forward to building a community where researchers and practitioners in the justice space can make connections, build new collaborations, and advance the field of access to justice.

Sign up for the AI&A2J Research x Practice seminar, every first Friday of the month on Zoom.

Categories
AI + Access to Justice Current Projects

Schedule for AI & A2J Jurix workshop

Our organizing committee was pleased to receive many excellent submissions for the AI & A2J Workshop at Jurix on December 18, 2023. We were able to select half of the submissions for acceptance, and we extended the half-day workshop to be a full-day workshop to accommodate the number of submissions.

We are pleased to announce our final schedule for the workshop:

Schedule for the AI & A2J Workshop

Morning Sessions

Welcome Kickoff, 09:00-09:15

Conference organizers welcome everyone, lead introductions, and review the day’s plan.

1: AI-A2J in Practice, 09:15-10:30 AM 

09:15-09:30: Juan David Gutierrez: AI technologies in the judiciary: Critical appraisal of LLMs in judicial decision making

09:30-09:45: Ransom Wydner, Sateesh Nori, Eliza Hong, Sam Flynn, and Ali Cook: AI in Access to Justice: Coalition-Building as Key to Practical and Sustainable Applications

09:45-10:00: Mariana Raquel Mendoza Benza: Insufficient transparency in the use of AI in the judiciary of Peru and Colombia: A challenge to digital transformation

10:00-10:15: Vanja Skoric, Giovanni Sileno, and Sennay Ghebreab: Leveraging public procurement for LLMs in the public sector: Enhancing access to justice responsibly

10:15-10:30: Soumya Kandukuri: Building the AI Flywheel in the American Judiciary

Break: 10:30-11:00 

2: AI for A2J Advice, Issue-Spotting, and Engagement Tasks, 11:00-12:30 

11:00: Opening remarks to the session

11:05-11:20: Sam Harden: Rating the Responses to Legal Questions by Generative AI Models

11:20-11:35: Margaret Hagan: Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories

11:35-11:50: Nick Goodson and Rongfei Lui: Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process

11:50-12:05: Nina Toivonen, Marika Salo-Lahti, Mikko Ranta, and Helena Haapio: Beyond Debt: The Intersection of Justice, Financial Wellbeing and AI

12:05-12:15: Amit Haim: Large Language Models and Legal Advice

12:15-12:30: General Discussions, Takeaways, and Next Steps on AI for Advice

Break: 12:30-13:30

Afternoon Sessions

3: AI for Forms, Contracts & Dispute Resolution, 13:30-15:00 

13:30: Opening remarks to this session

13:35-13:50: Quinten Steenhuis, David Colarusso, and Bryce Wiley: Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications

13:50-14:05: Katie Atkinson, David Bareham, Trevor Bench-Capon, Jon Collenette, and Jack Mumford: Tackling the Backlog: Support for Completing and Validating Forms

14:05-14:20: Anne Ketola, Helena Haapio, and Robert de Rooy: Chattable Contracts: AI Driven Access to Justice

14:20-14:30: Nishat Hyder-Rahman and Marco Giacalone: The role of generative AI in increasing access to justice in family (patrimonial) law

14:30-15:00: General Discussions, Takeaways, and Next Steps on AI for Forms & Dispute Resolution

Break: 15:00-15:30

4:  AI-A2J Technical Developments, 15:30-16:30

15:30: Welcome to the session

15:35-15:50: Marco Billi, Alessandro Parenti, Giuseppe Pisano, and Marco Sanchi: A hybrid approach of accessible legal reasoning through large language models

15:50-16:05: Bartosz Krupa: Polish BERT legal language model

16:05-16:20: Jakub Drápal: Understanding Criminal Courts

16:20-16:30: General Discussion on Technical Developments in AI & A2J

Closing Discussion: 16:30-17:00

What are the connections between the sessions? What next steps do participants think will be useful? What new research questions and efforts might emerge from today?

Categories
AI + Access to Justice Current Projects

Report a problem you’ve found with AI & legal help

The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute by entering information on this form, which feeds into the database.

We will be making this database available in the near future, as we collect more records & review them.

For this database, we’re looking for specific examples of where AI platforms (like ChatGPT, Bard, Bing Chat, etc.) provide problematic responses, like:

  • incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
  • hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
  • irrelevant, distracting, or off-topic information;
  • misrepresentation of the law;
  • overly simplified information that loses key nuance or cautions;
  • other responses that might be harmful to a person trying to get legal help.

You can send in any incidents you’ve experienced here at this form: https://airtable.com/apprz5bA7ObnwXEAd/shrQoNPeC7iVMxphp 

We will be reviewing submissions & making this incident database available in the future, for those interested.

Fill in the form to report an AI-Justice problem incident

Categories
AI + Access to Justice Current Projects

Call for papers to the JURIX workshop on AI & Access to Justice

At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice.

There is an open call for submissions to the workshop, and the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending.

The workshop will be on December 18, 2023 in Maastricht, Netherlands (with possible hybrid participation available).

See more about the conference at the main JURIX 23 website.

About the AI & A2J workshop

This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.

Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.

What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?

At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:

  • Identify the key challenges and opportunities for using AI to improve access to justice.
  • Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
  • Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
  • Develop proposals for how to hold AI models accountable for the public good.

Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.

Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).

Are you generally interested in AI & Access to Justice? Sign up for our Stanford Legal Design Lab AI-A2J interest list to stay in touch.

Submit a paper to the AI & A2J Workshop

We welcome submissions of 4-12 pages (using the IOS Press formatting guidelines). A selection will be made on the basis of workshop-level reviewing focusing on overall quality, relevance, and diversity.

Workshop submissions may be about the topics described above, including:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Deadline extended to November 20, 2023

Submission Link: Submit your 4-12 page paper here: https://easychair.org/my/conference?conf=jurixaia2j

Notification: November 28, 2023

Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands

More about the JURIX Conference

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year, the JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands. It will take place on December 18-20, 2023.

The proceedings of the conference will be published in the Frontiers of Artificial Intelligence and Applications series of IOS Press. JURIX follows the gold standard and provides one of the best dissemination platforms in AI & law.