The Legal Design Lab is excited to co-organize a new workshop at the International Conference on Artificial Intelligence and Law (ICAIL 2025):
AI for Access to Justice (AI4A2J@ICAIL 2025)
📍 Where? Northwestern University, Chicago, Illinois, USA
🗓 When? June 20, 2025 (Hybrid – in-person and virtual participation available)
📄 Submission Deadline: May 4, 2025
📬 Acceptance Notification: May 18, 2025
This workshop brings together researchers, technologists, legal aid practitioners, court leaders, policymakers, and interdisciplinary collaborators to explore the potential and pitfalls of using artificial intelligence (AI) to expand access to justice (A2J). It is part of the larger ICAIL 2025 conference, the leading international forum for AI and law research, hosted this year at Northwestern University in Chicago.
Why this workshop?
Legal systems around the world are struggling to meet people’s needs—especially in housing, immigration, debt, and family law. AI tools are increasingly being tested and deployed to address these gaps: from chatbots and form fillers to triage systems and legal document classifiers. Yet these innovations also raise serious questions around risk, bias, transparency, equity, and governance.
This workshop will serve as a venue to:
Share and critically assess emerging work on AI-powered legal tools
Discuss design, deployment, and evaluation of AI systems in real-world legal contexts
Learn from cross-disciplinary perspectives to better guide responsible innovation in justice systems
What are we looking for?
We welcome submissions from a wide range of contributors—academic researchers, practitioners, students, community technologists, court innovators, and more.
We’re seeking:
Research papers on AI and A2J
Case studies of AI tools used in courts, legal aid, or nonprofit contexts
Design proposals or system demos
Critical perspectives on the ethics, policy, and governance of AI for justice
Evaluation frameworks for AI used in legal services
Collaborative, interdisciplinary, or community-centered work
Topics might include (but are not limited to):
Legal intake and triage using large language models (LLMs)
AI-guided form completion and document assembly
Language access and plain language tools powered by AI
Risk scoring and case prioritization
Participatory design and co-creation with affected communities
Bias detection and mitigation in legal AI systems
Evaluation methods for LLMs in legal services
Open-source or public-interest AI tools
We welcome both completed projects and works-in-progress. Our goal is to foster a diverse conversation that supports learning, experimentation, and critical thinking across the access to justice ecosystem.
Workshop Format
The workshop will be held on June 20, 2025, in a hybrid format, with in-person sessions in Chicago, Illinois, and the option for virtual participation. Presenters and attendees are welcome to join from anywhere.
Workshop Committee
Hannes Westermann, Maastricht University Faculty of Law
Jaromír Savelka, Carnegie Mellon University
Marc Lauritsen, Capstone Practice Systems
Margaret Hagan, Stanford Law School, Legal Design Lab
Submissions are due by May 4, 2025. Notifications of acceptance will be sent by May 18, 2025.
We’re thrilled to help convene this conversation on the future of AI and justice—and we hope to see your ideas included. Please spread the word to others in your network who are building, researching, or questioning the role of AI in the justice system.
Lessons from Cristina Llop’s Work on Language Access in the Legal System
Artificial intelligence (AI) and machine translation (MT) are often seen as tools with the potential to expand access to justice, especially for non-English speakers in the U.S. legal system. However, while AI-driven translation tools like Google Translate and AutoML offer impressive accuracy in general contexts, their effectiveness in legal settings remains questionable.
At the Stanford Legal Design Lab’s AI and Access to Justice research webinar on February 7, 2025, legal expert Cristina Llop shared her observations from reviewing live translations between legal providers’ staff and users. Her findings highlight both the potential and pitfalls of using AI for language access in legal settings. This article explores how AI performs in practice, where it can be useful, and why human oversight, national standards, and improved training datasets are critical.
How Machine Translation Performs in Legal Contexts
Many courts and legal service providers have turned to AI-powered Neural Machine Translation (NMT) models like Google Translate to help bridge language barriers. While AI is improving, Llop’s research suggests that accuracy in general language translation does not necessarily translate to legal language accuracy.
1. The Good: AI Can Be Useful in Certain Scenarios
Machine translation tools can provide immediate, cost-effective assistance in specific legal language tasks, such as:
Translating declarations and witness statements
Converting court forms and pleadings into different languages
Making legal guides and court websites more accessible
Supporting real-time interpretation in court help centers and clerk offices
This can be especially valuable in resource-strapped courts and legal aid groups that lack human interpreters for every case. However, Llop cautions that even when AI-generated translations sound fluent, they may not be legally precise or safe to rely on.
AI doesn’t pick up on legal context and mistranslates key information about trials, filings, courts, and options.
2. The Bad: Accuracy Breaks Down in Legal Contexts
Llop identified systematic mistranslations that could have serious consequences:
Common legal terms are mistranslated due to a lack of specialized training data. For example, “warrant” is often translated as “court order,” which downplays the severity of a legal document.
Contextual misunderstandings lead to serious errors:
“Due date” was mistranslated as “date to give birth.”
“Trial” was often translated as “test.”
“Charged with a battery case” turned into “loaded with a case of batteries.”
Pronoun confusion creates ambiguity:
Spanish’s use of “su” (your/his/her/their) is often mistranslated in legal documents, leading to uncertainty about property ownership, responsibility, or court filings.
In restraining order cases, it was unclear who was accusing whom, which could put victims at risk.
AI can introduce gender biases:
Words with no inherent gender (e.g., “politician”) are often translated as male.
The Spanish phrase “me maltrata” could mean either “she mistreats me” or “he mistreats me,” since the subject’s gender is unspecified. The machine would default to “he mistreats me,” potentially distorting evidence in domestic violence cases.
Without human review, these AI-driven errors can go unnoticed, leading to severe legal consequences.
The Dangers of Mistranslation in Legal Interactions
One of the most troubling findings from Llop’s work was the invisible breakdowns in communication between legal providers and non-English speakers.
1. Parallel Conversations Instead of Communication
In many cases, both parties believed they were exchanging information, but in reality:
Legal providers were missing key facts from litigants.
Users did not realize that their information was misunderstood or misrepresented.
Critical details — such as the nature of an abuse claim or financial disclosures — were being lost.
This failure to communicate accurately could result in:
People choosing the wrong legal recourse and misunderstanding what options are available to them.
Legal provider staff making decisions based on incomplete or distorted information, and offering services and options based on misunderstandings of the person’s situation or preferences.
Access to justice being compromised for vulnerable litigants.
2. Why a Glossary Isn’t Enough
Some legal institutions have tried to mitigate errors by adding legal glossaries to machine translation tools. However, Llop’s research found that glossary-based corrections do not always solve the problem:
Example 1: The word “address” was provided to the AI to ensure translation to “mailing address” (instead of “home address”) in one context — but then mistakenly applied when a clerk asked, “What issue do you want to address?”
Example 2: “Will” (as in a legal document) was mistranslated when applied to the auxiliary verb “will” in regular interactions (“I will send you this form”).
Example 3: A glossary fix for “due date” worked.
Example 4: A glossary fix for “pleading” worked but failed to adjust grammatical structure or pronoun usage.
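To see why these fixes misfire, here is a minimal, hypothetical sketch of context-blind glossary substitution in Python. This is not the actual mechanism inside any particular MT product, but it reproduces the same failure mode: the rule fires on a word’s surface form, not on its meaning in context.

```python
# Hypothetical sketch: a glossary rule fires on the surface form of a word,
# not on its meaning in context. Real MT glossaries are more sophisticated,
# but they can fail in exactly this way.

GLOSSARY = {
    # source term -> forced rendering, chosen for ONE intended context
    "address": "mailing address",          # intended: "your address"
    "will": "last will and testament",     # intended: estate documents
}

def apply_glossary(sentence: str) -> str:
    """Naively force every glossary term, ignoring part of speech."""
    for term, forced in GLOSSARY.items():
        sentence = sentence.replace(term, forced)
    return sentence

print(apply_glossary("Please confirm your address."))
# -> "Please confirm your mailing address."   (the rule helps)

print(apply_glossary("What issue do you want to address?"))
# -> "What issue do you want to mailing address?"   (the rule misfires)

print(apply_glossary("I will send you this form."))
# -> "I last will and testament send you this form."
```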
These patchwork fixes are not enough. More comprehensive training, oversight, and quality control are needed.
Advancing Legal Language AI: AutoML and Human Review
One promising improvement is AutoML, which allows legal organizations to train machine translation models with their own specialized legal data.
AutoML: A Step Forward, But Still Flawed
Llop’s team worked on an AutoML project through the following loop (a rough code sketch follows this list):
Collecting 8,000+ legal translation pairs from official legal sources that had been translated by experts.
Correcting AI-generated translations manually.
Feeding improved translations back into the model.
Iterating until translations were more accurate.
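In code, that human-in-the-loop cycle looks roughly like the sketch below. Every function here is a stand-in we are assuming for illustration (names like train_model and expert_review are hypothetical, not Google AutoML’s actual API):

```python
# Hypothetical sketch of the retraining loop described above; every helper
# is a stand-in, not a real AutoML API call.

def train_model(pairs):
    """Stand-in: fine-tune an MT model on (source, target) pairs."""
    return {"training_data": list(pairs)}

def translate(model, source):
    """Stand-in: produce a draft machine translation."""
    return f"draft({source})"

def expert_review(source, draft):
    """Stand-in: a bilingual legal expert corrects the draft."""
    return draft  # in practice, returns the expert's corrected translation

def refine(seed_pairs, review_queue, max_rounds=5):
    pairs = list(seed_pairs)          # e.g., 8,000+ expert-vetted pairs
    model = train_model(pairs)
    for _ in range(max_rounds):
        corrections = []
        for source in review_queue:
            draft = translate(model, source)
            fixed = expert_review(source, draft)
            if fixed != draft:        # only corrected pairs are added back
                corrections.append((source, fixed))
        if not corrections:
            break                     # experts accepted every draft
        pairs.extend(corrections)
        model = train_model(pairs)    # retrain on the enlarged dataset
    return model

model = refine([("usted debe comparecer", "you must appear")],
               ["su audiencia es el lunes"])
```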
Results showed that AutoML improved translation quality, but major issues remained:
AI struggled with conversational context. If a prior sentence referenced “my wife,” but the next message about the wife didn’t specify gender, AI might mistakenly switch the pronoun to “he”.
AI overfit to common legal phrases, inserting “petition” even when the correct translation should have been “form.”
These challenges highlight why human review is essential.
Real-Time Machine Translation
While text-based AI translation can be refined over time, real-time translation — such as voice-to-text systems in legal offices — presents even greater challenges.
Voice-to-Text Lacks Punctuation Awareness
People do not dictate punctuation, but pauses and commas can change legal meaning. For example:
“I’m guilty” vs. “I’m not guilty” (missing comma error). In Spanish, for instance, “No, soy culpable” (“No, I am guilty”) becomes “No soy culpable” (“I am not guilty”) if the comma is dropped.
Minor misspellings or poor grammar can dramatically change a translation.
AI Struggles with Speech Patterns
Legal system users come from diverse linguistic backgrounds, making real-time translation even more difficult. AI performs poorly when users:
Speak quickly or use filler words (“um,” “huh,” “oh”).
Have soft speech or heavy accents.
Use sentence structures influenced by indigenous or regional dialects.
These limitations make it clear that AI faces major challenges in performing accurately in high-stakes legal interactions.
The Need for National Standards and Training Datasets
Llop’s research underscores a critical gap: there are no national standards, training datasets, or quality benchmark datasets for legal translation AI.
A National Legal Translation Project
Llop saw an opportunity for improvement through:
A centralized effort to collect high-quality legal translation pairs.
State-specific localization of legal terms.
Guidelines for AI usage in courts, legal aid orgs, and other institutions.
Such a standardized dataset could train AI more effectively while ensuring legal accuracy.
Training for English-Only Speakers
English-speaking legal provider staff need training on how to structure their speech for better AI translation:
Using plain language and short sentences.
Avoiding vague pronouns (“his, her, their”).
Confirming meaning before finalizing translations.
AI, Human Oversight, and National Infrastructure in Legal Translation
Machine translation and AI can be useful, but they are far from perfect. Without human review, legal expertise, and national standards, AI-generated translations could compromise access to justice.
Llop’s work highlights the urgent need for:
Human-in-the-loop AI translation.
Better training data tailored for legal contexts.
National standards for AI language access.
As AI continues to evolve, it must be designed with legal precision and human oversight — because in law, a mistranslation can change lives.
This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions, the kind of answers individuals may get when they go online to find help.
Our team has developed a new ‘Battle Mode’ of our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. We also ask the experts to think aloud as they play, articulating their reasoning. This lets us gain insight into why a particular response is deemed good or bad, helpful or harmful.
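Head-to-head judgments like these are commonly aggregated into model rankings with a rating system such as Elo. Below is a minimal sketch of that aggregation step, as one standard approach we are assuming for illustration, not necessarily the method Learned Hands uses:

```python
# Minimal sketch: turn pairwise "battle" outcomes into Elo-style ratings.
# Model names and the K-factor are illustrative.

from collections import defaultdict

K = 32  # how strongly each battle moves the ratings

def expected(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B, given current ratings."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def rate(battles):
    """battles: iterable of (model_a, model_b, winner) tuples."""
    ratings = defaultdict(lambda: 1000.0)
    for a, b, winner in battles:
        e_a = expected(ratings[a], ratings[b])
        s_a = 1.0 if winner == a else 0.0
        ratings[a] += K * (s_a - e_a)
        ratings[b] += K * ((1.0 - s_a) - (1.0 - e_a))
    return dict(ratings)

# Example: three expert judgments on landlord-tenant answers
print(rate([
    ("model_x", "model_y", "model_x"),
    ("model_x", "model_z", "model_z"),
    ("model_y", "model_z", "model_z"),
]))
```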
Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries, one that applies to major AI platforms as well as to local chatbots and tools developed by individual groups and startups. By doing so, we hope to refine our methods so that we can accurately assess AI’s capabilities in answering people’s legal questions.
Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment.
We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.
Stay tuned for the results of our expert-led rating games and feedback sessions!
If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.
The Legal Design Lab is proud to announce a new monthly online, public seminar on AI & Access to Justice: Research x Practice.
At this seminar, we’ll be bringing together leading academic researchers with practitioners and policymakers, who are all working on how to make the justice system more people-centered, innovative, and accessible through AI. Each seminar will feature a presentation from either an academic or practitioner who is working in this area & has been gathering data on what they’re learning. The presentations could be academic studies about user needs or the performance of technology, or less academic program evaluations or case studies from the field.
We look forward to building a community where researchers and practitioners in the justice space can make connections, build new collaborations, and advance the field of access to justice.
Sign up for the AI&A2J Research x Practice seminar, every first Friday of the month on Zoom.
Our organizing committee was pleased to receive many excellent submissions for the AI & A2J Workshop at Jurix on December 18, 2023. We were able to select half of the submissions for acceptance, and we extended the half-day workshop to be a full-day workshop to accommodate the number of submissions.
We are pleased to announce our final schedule for the workshop:
Schedule for the AI & A2J Workshop
Morning Sessions
Welcome Kickoff, 09:00-09:15
Conference organizers welcome everyone, lead introductions, and review the day’s plan.
1: AI-A2J in Practice, 09:15-10:30
09:15-09:30: Juan David Gutierrez: AI technologies in the judiciary: Critical appraisal of LLMs in judicial decision making
09:30-09:45: Ransom Wydner, Sateesh Nori, Eliza Hong, Sam Flynn, and Ali Cook: AI in Access to Justice: Coalition-Building as Key to Practical and Sustainable Applications
09:45-10:00: Mariana Raquel Mendoza Benza: Insufficient transparency in the use of AI in the judiciary of Peru and Colombia: A challenge to digital transformation
10:00-10:15: Vanja Skoric, Giovanni Sileno, and Sennay Ghebreab: Leveraging public procurement for LLMs in the public sector: Enhancing access to justice responsibly
10:15-10:30: Soumya Kandukuri: Building the AI Flywheel in the American Judiciary
Break: 10:30-11:00
2: AI for A2J Advice, Issue-Spotting, and Engagement Tasks, 11:00-12:30
11:00: Opening remarks to the session
11:05-11:20: Sam Harden: Rating the Responses to Legal Questions by Generative AI Models
11:20-11:35: Margaret Hagan: Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories
11:35-11:50: Nick Goodson and Rongfei Lui: Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process
11:50-12:05: Nina Toivonen, Marika Salo-Lahti, Mikko Ranta, and Helena Haapio: Beyond Debt: The Intersection of Justice, Financial Wellbeing and AI
12:05-12:15: Amit Haim: Large Language Models and Legal Advice
12:15-12:30: General Discussions, Takeaways, and Next Steps on AI for Advice
Break: 12:30-13:30
Afternoon Sessions
3: AI for Forms, Contracts & Dispute Resolution, 13:30-15:00
13:30: Opening remarks to this session
13:35-13:50: Quinten Steenhuis, David Colarusso, and Bryce Wiley: Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications
13:50-14:05: Katie Atkinson, David Bareham, Trevor Bench-Capon, Jon Collenette, and Jack Mumford: Tackling the Backlog: Support for Completing and Validating Forms
14:05-14:20: Anne Ketola, Helena Haapio, and Robert de Rooy: Chattable Contracts: AI Driven Access to Justice
14:20-14:30: Nishat Hyder-Rahman and Marco Giacalone: The role of generative AI in increasing access to justice in family (patrimonial) law
14:30-15:00: General Discussions, Takeaways, and Next Steps on AI for Forms & Dispute Resolution
Break: 15:00-15:30
4: AI-A2J Technical Developments, 15:30-16:30
15:30: Welcome to the session
15:35-15:50: Marco Billi, Alessandro Parenti, Giuseppe Pisano, and Marco Sanchi: A hybrid approach of accessible legal reasoning through large language models
15:50-16:05: Bartosz Krupa: Polish BERT legal language model
16:05-16:20: Jakub Drápal: Understanding Criminal Courts
16:20-16:30: General Discussion on Technical Developments in AI & A2J
Closing Discussion: 16:30-17:00
What are the connections between the sessions? What next steps do participants think will be useful? What new research questions and efforts might emerge from today?
At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice.
There is an open call for submissions to the workshop, and the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending.
The workshop will be on December 18, 2023 in Maastricht, Netherlands (with possible hybrid participation available).
This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.
Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.
What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?
At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:
Identify the key challenges and opportunities for using AI to improve access to justice.
Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
Develop proposals for how to hold AI models accountable for the public good.
Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.
Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).
We welcome submissions of 4-12 pages (using the IOS formatting guidelines). A selection will be made on the basis of workshop-level reviewing focusing on overall quality, relevance, and diversity.
Workshop submissions may be about the topics described above, including:
findings of research about how AI is affecting access to justice,
evaluation of AI models and tools intended to benefit access to justice,
outcomes of new interventions intended to deploy AI for access to justice,
proposals of future work to use AI or hold AI initiatives accountable,
principles & frameworks to guide work in this area.
Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands
More about the JURIX Conference
The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.
This year, the JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands. It will take place on December 18-20, 2023.
The proceedings of the conference will be published in the Frontiers in Artificial Intelligence and Applications series of IOS Press. JURIX follows the gold standard and provides one of the best dissemination platforms in AI & law.
This past week, I had the privilege of attending the State of Privacy event in Rome, with policy, technical, and research leaders from Italy and Europe.
I was at a table focused on the intersection of Legal Design, AI platforms, and privacy protections.
Our multidisciplinary group spent several hours getting concrete: what are the scenarios and user needs around privacy & AI platforms? What are the main concerns and design challenges?
We then moved towards an initial brainstorm. What ideas for interventions, infrastructure, or processes could help move AI platforms towards greater privacy protections — and avoid privacy problems that have arisen in similar technology platform advancements in the recent past? What could we learn from privacy challenges, solutions, and failures that came with the rise of websites on the open Internet, the advancement of search engines, and the popular use of social media platforms?
Our group circled around some promising, exciting ideas for cross-Atlantic collaboration. Here is a short recap of them.
Learning from the User Burdens of Privacy Pop-ups & Cookie Banners
Can we avoid putting so many burdens on the user, like with cookie banners and privacy pop-ups on every website? We can learn from the current crop of privacy protections, which warn European visitors when they open any new website and require them to read, choose, and click through pop-up menus about cookies, privacy, and more. What are ways that we can lower these user burdens and privacy burn-out interfaces?
Smart AI privacy warnings, woven into interactions
Can the AI be smart enough to respond with warnings when people are crossing into a high-risk area? Perhaps instead of generalized warnings about privacy implications — a conversational AI agent can let a person know when they are sharing data/asking for information that has a higher risk of harm. This might be when a person asks a question about their health, finances, personal security, divorce/custody, domestic violence, or another topic that could have damaging consequences to them if others (their family members, financial institutions, law enforcement, insurance companies, or other third parties) found out. The AI could be programmed to be privacy-protective, to easily let a person choose at the moment about whether to take the risk of sharing this sensitive data, to help a person understand the risks in this specific domain, and to help the person delete or manage their privacy for this particular interaction.
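As a thought experiment, the gating logic for such an agent might look like the sketch below. This is entirely hypothetical: a real system would need a trained risk classifier and carefully designed consent flows, not keyword matching.

```python
# Hypothetical sketch of an in-conversation privacy gate: before a message
# is processed or stored, check whether it touches a high-risk topic and,
# if so, give the user an explicit choice. Topics and keywords are
# illustrative; a real system would use a trained classifier.

from typing import Callable, Optional

HIGH_RISK_TOPICS = {
    "health": ["diagnosis", "medication", "therapy"],
    "finances": ["debt", "bankruptcy", "salary"],
    "family & safety": ["divorce", "custody", "restraining order", "abuse"],
}

def detect_risk(message: str) -> Optional[str]:
    """Return the high-risk topic a message touches, if any."""
    text = message.lower()
    for topic, keywords in HIGH_RISK_TOPICS.items():
        if any(kw in text for kw in keywords):
            return topic
    return None

def handle_message(message: str,
                   user_consents: Callable[[str], bool]) -> str:
    """Warn and ask before processing a message on a sensitive topic."""
    topic = detect_risk(message)
    if topic and not user_consents(topic):
        return "Understood. That message was discarded and not stored."
    return f"(assistant reply to: {message})"

# Example: a consent prompt stub that always declines to share
print(handle_message("I need help with a restraining order",
                     user_consents=lambda topic: False))
```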
Choosing the Right Moment for Privacy Warnings & Choices
Can warnings and choices around privacy come at the ‘right moment’? Perhaps it’s not best to warn people before they sign up for a service, or even right when they are logging on. This is typically when people are most hungry for AI interaction & information. They don’t want to be distracted. Rather, can the warnings, choices, and settings come during the interaction, or after it? A user is likely to have ‘buyer’s remorse’ with AI platforms: did I overshare? Who can see what I just shared? Could someone find out what I talked about with the AI? How can privacy terms & controls be easily accessible right when people need them, usually during these “clean up” moments?
Conducting More Varied User Research about AI & Protections
We need more user research in different cultures and demographics about how people use AI, relate to it, and critique it (or do not). To figure out how to develop privacy protections, warning/disclosure designs, and other techno-policy solutions, first we need a deeper understanding of various AI users, their needs and preferences, and their willingness to engage with different kinds of protections.
Building an International Network Working on AI & Privacy Protections
Could we have anchor universities, with strong computer science, policy, and law departments, that host workshops and training on the ethical development of AI platforms? These could help bring future technology leaders into cross-disciplinary contact with people from policy and law, to learn about social good matters like privacy. These cross-disciplinary groups could also help policy & law experts learn how to integrate their principles and research into more technical form, like by developing labeled datasets and model benchmarks.
Are you interested in ensuring there is privacy built into AI platforms? Are you working on user, technical, or policy research on what the privacy scenarios, needs, risks, and solutions might be on AI platforms? Please be in touch!
Thank you to Dr. Monica Palmirani for leading the Legal Design group at the State of Privacy event, at the lovely Museo Nazionale Etrusco di Villa Giulia in Rome.
DocuBot is a tool to fill in legal documents and other forms through an SMS or other chatbot-like experience. The bot asks questions to fill in the form.
Here is more information from its creator, 1Law.
1LAW is proud to announce the creation of Docubot™, a legal-document-generating artificial intelligence. Working in conjunction with some of the best lawyers in the United States, Docubot draws on form databases of thousands of legal documents. Docubot will assist individuals with legal queries as well as generate documents for them. To help serve legal aid organizations, Docubot will allow users to interact via SMS text.
Tech specs:
Written in Go on the server
Powered by the Watson REST API
Swift on the iOS side
Client-server communication handled over the WebSocket protocol
All output is encrypted
The document is generated using a headless WebKit browser that takes an HTML document and outputs a PDF. The PDF is stored in a private S3 bucket, and a short-lived URL is generated and sent to the user; each time a user loads the thread, they are given a new URL. The document is also backed up on the S3 server.
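For illustration, short-lived links like this are typically minted as presigned URLs. Here is a minimal sketch in Python using boto3, purely as an assumption for illustration: 1Law’s server is written in Go, its actual implementation is not public, and the bucket and key names below are hypothetical.

```python
# Minimal sketch: issue a short-lived link to a PDF in a private S3 bucket.
# Bucket and key names are hypothetical; this is not DocuBot's actual code.

import boto3

s3 = boto3.client("s3")

def fresh_document_url(bucket: str, key: str, ttl_seconds: int = 900) -> str:
    """Return a presigned GET URL that expires after ttl_seconds."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=ttl_seconds,
    )

# Called on every thread load, so each visit gets a brand-new URL.
print(fresh_document_url("docubot-private-docs", "threads/123/form.pdf"))
```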
Contact us at: info@www.1law.com for more information.