Categories
Reading System Evaluation

NCSC User Testing Toolkit

The Access to Justice team at the National Center for State Courts has a new User Testing Toolkit out. It can help courts and their partners get user feedback on key papers, services, and tools, like:

  • Court Forms: are they understandable and actionable?
  • Self-Help Materials: can litigants find and engage with them effectively?
  • Court Websites: are they discoverable, accessible, and useful?
  • Efiling Systems: are they easy to use, and to get right the first time?
  • Signage and Wayfinding: can people easily find their way around in-person and digital court spaces, with dignity?
  • Accessibility: are the courts' physical and digital platforms sufficiently easy to use for all different kinds of people?

The toolkit has background guidance on user testing, strategies for planning testing sessions, and example materials to use in planning, recruitment, facilitation, and analysis.

See more:

G. Vazquez & Z. Zarnow, User Testing Toolkit: Improving Court Usability and Access: A Toolkit for Inclusive and Effective User Testing, Version 1 (Williamsburg, VA: National Center for State Courts, 2024): https://www.ncsc.org/__data/assets/pdf_file/0012/104124/User-Testing-Toolkit.pdf

Categories
AI + Access to Justice Current Projects

Design Workbook for Legal Help AI Pilots

For our upcoming AI + Access to Justice Summit and our AI for Legal Help class, our team has created a new design workbook to guide people through scoping a new AI pilot.

We encourage others to use and explore this AI Design Workbook to help think through:

  • Use Cases and Workflows
  • Specific Legal Tasks that AI could do (or should not do)
  • User Personas, and how they might need or worry about AI — or how they might be affected by it
  • Data plans for training AI and for deploying it
  • Risks, Laws, and Ethics: brainstorming what could go wrong or what regulators might require, along with mitigation and prevention plans to deal with these concerns proactively
  • Quality and Efficiency Benchmarks to aim for with a new intervention (and how to compare the tech with the human service)
  • Support needed to move into the next phases of tech prototyping and pilot deployment

Responsible AI development should proceed through three careful stages: design and policy research, tech prototyping and benchmark evaluation, and piloting in a controlled, careful way. We hope this workbook can be useful to groups who want to get started on this journey!

Categories
AI + Access to Justice Current Projects

Jurix ’24 AI for Access to Justice Workshop

Building on last year’s very successful academic workshop on AI & Access to Justice at Jurix ’23 in the Netherlands, this year we are pleased to announce a new workshop at Jurix ’24 in Czechia.

Margaret Hagan of the Stanford Legal Design Lab is co-leading an academic workshop at the legal technology conference Jurix, on AI for Access to Justice. Quinten Steenhuis from Suffolk LIT Lab and Hannes Westermann of Maastricht University Faculty of Law will co-lead the workshop.

We invite legal technologists, researchers, and practitioners to join us in Brno, Czechia on December 11th for a full-day, hybrid workshop on innovations in AI for helping close the access to justice gap: the majority of legal problems that go unsolved around the world because potential litigants lack the time, money, or ability to participate in court processes to solve their problems.

See our workshop homepage here for more details on participation.

More on the Workshop

The workshop will be a hybrid event. Participants will be able to join in person or remotely via Zoom, although we hope for broad in-person attendance. Depending on interest, selection preference may be given to in-person participation.

The workshop will feature short paper presentations (likely 10 minutes), demos, and, if possible, interactive exercises that invite attendees to help design approaches to closing the access to justice gap with the help of AI.

Like last year, it will be a full-day workshop.

We invite contributors to submit:

  • short papers (5-10 pages), or
  • proposals for demos or interactive workshop exercises

We welcome works in progress, although depending on interest, we will give a preference to complete ideas that can be evaluated, shared and discussed.

The focus of submissions should be on AI tools, datasets, and approaches — whether large language models, traditional machine learning, or rules-based systems — that solve the real-world problems of unrepresented litigants or legal aid programs. Papers discussing the ethical implications, limits, and policy implications of AI in law are also welcome.

Other topics may include:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Papers should follow the formatting instructions of CEUR-WS.

Submissions will be subject to peer review, with a view to possible publication as workshop proceedings. Submissions will be evaluated on overall quality, technical depth, relevance, and diversity of topics to ensure an engaging and high-quality workshop.

Important dates

We invite all submissions to be made no later than November 11th, 2024.

We anticipate making decisions by November 22, 2024.

The workshop will be held on December 11, 2024.

Submit your proposals via EasyChair.

Authors are encouraged to submit an abstract even before making a final submission. You can revise your submission until the deadline of November 11th.

More about Jurix

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year's JURIX conference on Legal Knowledge and Information Systems will be hosted in Brno, Czechia. It will take place on December 11-13, 2024.

Categories
AI + Access to Justice Current Projects

Good/Bad AI Legal Help at Trust and Safety Conference

This week, Margaret Hagan presented at the Trust and Safety Research Conference, which brings together academics, tech professionals, regulators, nonprofits, and philanthropies to work on making the Internet a safer, more user-friendly place.

Margaret presented interim results of the Lab's expert and user studies of AI's performance at answering everyday legal questions, such as those around evictions and other landlord-tenant problems.

Some of the topics discussed in the audience Q&A and the panel on the Future of Search included:

  • How can regulators, frontline domain experts (like legal aid lawyers and court professionals), and tech companies better work together to spot harmful content, set tailored policies, and ensure better outcomes for users?
  • Should tech companies' and governments' policies about the best way to present information, and how much of it, differ across domains? For legal help queries, for example, is it better to encourage straightforward, simple, directive, authoritative info, or more complex, detailed information that encourages curiosity and exploration?
  • How do we more proactively spot the harms and risks that might come from new and novel tech systems, which might be quite different from previous search engines or other tech systems?
  • How can we hold tech companies accountable for building more accurate systems, without chilling them out of certain domains (like legal or health) where they might refuse to provide any substantial information for fear of liability?

Categories
AI + Access to Justice Class Blog Current Projects Design Research

Interviewing Legal Experts on the Quality of AI Answers

This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions that individuals may get when they go online to find help.

Learned Hands Battle Mode

Our team has developed a new ‘Battle Mode’ for our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. Additionally, we have the experts think aloud as they play, asking them to articulate their reasoning. This allows us to gain insights into why a particular response is deemed good or bad, helpful or harmful.
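Pairwise comparisons like these are commonly aggregated into a leaderboard with a simple Elo-style rating. A minimal sketch of that aggregation step (the model names and votes below are hypothetical illustrations, not our study data):

```python
# Minimal Elo-style aggregation of pairwise "battle" votes between AI models.
# Model names and votes are hypothetical placeholders, not actual results.

def update_elo(ratings, winner, loser, k=32):
    """Update ratings in place after one expert picks `winner` over `loser`."""
    expected_win = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected_win)
    ratings[loser] -= k * (1 - expected_win)

ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
votes = [("model_a", "model_b"), ("model_a", "model_c"),
         ("model_b", "model_c"), ("model_a", "model_b")]

for winner, loser in votes:
    update_elo(ratings, winner, loser)

# Print a simple leaderboard, best-rated model first.
for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```

The think-aloud transcripts carry the qualitative "why," while ratings like these give a quantitative summary of which answers experts prefer overall.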

Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries. This standardized approach will be applicable to major AI platforms, as well as local chatbots and tools developed by individual groups and startups. By doing so, we hope to refine our methods for conducting audits and benchmarks, ensuring that we can accurately assess AI’s capabilities in answering people’s legal questions.

Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment. 

We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.

Stay tuned for the results of our expert-led rating games and feedback sessions!

If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.

https://airtable.com/embed/appMxYCJsZZuScuTN/pago0ZNPguYKo46X8/form

Categories
AI + Access to Justice Current Projects

Autumn 24 AI for Legal Help

Our team is excited to announce the new, 2024-25 version of our ongoing class, AI for Legal Help. This school year, we’re moving from background user and expert research towards AI R&D and pilot development.

Can AI increase access to justice, by helping people resolve their legal problems in more accessible, equitable, and effective ways? What are the risks that AI poses for people seeking legal guidance, that technical and policy guardrails should mitigate?

In this course, students will design and develop new demonstration AI projects and pilot plans, combining human-centered design, tech & data work, and law & policy knowledge. 

Students will work on interdisciplinary teams, each partnered with frontline legal aid and court groups interested in using AI to improve their public services. Student teams will help their partners scope specific AI projects, spot and mitigate risks, train a model, test its performance, and think through a plan to safely pilot the AI. 

By the end of the class, students and their partners will co-design new tech pilots to help people dealing with legal problems like evictions, reentry from the criminal justice system, debt collection, and more.

Students will get experience in human-centered AI development, and critical thinking about if and how technology projects can be used in helping the public with a high-stakes legal problem. Along with their AI pilot, teams will establish important guidelines to ensure that new AI projects are centered on the needs of people, and developed with a careful eye towards ethical and legal principles.

Join our policy lab team to do R&D to define the future of AI for legal help. Apply for the class at the SLS Policy Lab link here.

Categories
AI + Access to Justice Current Projects

AI+A2J Research x Practice Seminar

The Legal Design Lab is proud to announce a new monthly online, public seminar on AI & Access to Justice: Research x Practice.

At this seminar, we’ll be bringing together leading academic researchers with practitioners and policymakers, who are all working on how to make the justice system more people-centered, innovative, and accessible through AI. Each seminar will feature a presentation from either an academic or practitioner who is working in this area & has been gathering data on what they’re learning. The presentations could be academic studies about user needs or the performance of technology, or less academic program evaluations or case studies from the field.

We look forward to building a community where researchers and practitioners in the justice space can make connections, build new collaborations, and advance the field of access to justice.

Sign up for the AI&A2J Research x Practice seminar, every first Friday of the month on Zoom.

Categories
AI + Access to Justice Current Projects

AI and A2J webinar series: Hannes Westermann — The Hammer and the Carpenter

By Nóra Al Haider

When you explore the online legal community space, you’ll notice that the topic of AI and Access to Justice (A2J) comes up quite frequently. Most agree that AI & A2J is a consequential topic. However, even though the importance of the topic is recognized, the phrase itself remains nebulous. What do we mean by AI & A2J? What kind of experiments are researchers and practitioners conducting? How are new tools and projects evaluated?

To delve into these questions, the Stanford Legal Design Lab recently initiated a webinar series. Each month, a presenter is invited to discuss a study or project in the AI & A2J space. Attendees can learn about new AI & A2J projects, ask questions, make connections, and find new ideas or protocols for their own work. The ultimate goal is to build more collaborations between researchers, service providers, technologists and policymakers.

The inaugural presenter for the webinar was Hannes Westermann. Hannes is an Assistant Professor in Law & Artificial Intelligence at Maastricht University and the Maastricht Law and Tech Lab. His current research focuses on using generative AI to enhance access to justice in a safe and practical manner.

Generative AI for Access to Justice

Generative AI can be a valuable addition in the access to justice space. Laypeople often have difficulty resolving their everyday legal issues, from debt to employment problems. This struggle is partly due to the complexity of the law. It is challenging for people to understand how laws apply to them and what actions they should take regarding their legal issues, assuming they can identify they have a legal issue in the first place.

Moreover, the cost of going to court can be high, not just financially but also in terms of time and emotional stress. This is particularly true if individuals are unaware of how the process works and how they should interact with the court system.

Generative AI could help alleviate some of these issues and create more opportunities for users to interact with the legal system in a user-centered way.

Hannes spotlighted three projects during his presentation that address these issues.

Justice Bot: Aiding Landlord-Tenant Disputes

The JusticeBot project, developed at the Cyberjustice Laboratory in Montreal, provides legal information to laypeople. The first version, available at https://justicebot.ca (in French), addresses landlord-tenant disputes. The bot asks users questions about their landlord-tenant issues and provides legal information based on their responses. Users start by indicating whether they are a landlord or tenant. Based on their choice, the system presents a series of questions tailored to common issues in that category, such as unpaid rent or early lease termination. The system guides users through these questions, providing explanations and references to relevant legal texts.

For instance, if a landlord indicates that their tenant is frequently late with rent payments, the JusticeBot asks follow-up questions to determine the frequency and impact of these late payments. It then provides information on the landlord’s legal rights and possible next steps, such as terminating the lease or seeking compensation.

The team at the Cyberjustice Laboratory collaborated with the Tribunal administratif du logement (TAL) in developing and marketing the JusticeBot. The TAL receives over 70,000 claims and over a million calls annually. By automating the initial information-gathering process, the JusticeBot could potentially alleviate some of this demand, allowing users to resolve issues without immediate legal intervention. So far, the JusticeBot has been used over 35k times.

The first iteration of the bot was built as a logic tree, with a logical connection between the questions and answers, making it possible to verify the accuracy of the legal information. In recent years, Westermann and his team have experimented with integrating language models such as GPT-4 into the JusticeBot (see here and here). This hybrid approach could ensure the accuracy of the information while enhancing the human-centered interface of the bot, and increase the efficiency of creating new bots.
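A logic tree of this kind can be pictured as a small branching dialogue where each answer determines the next question. A toy sketch, with invented questions and placeholder information (not JusticeBot's actual content):

```python
# Toy logic-tree dialogue in the spirit of a guided legal-information bot.
# All questions, answers, and "legal info" here are invented placeholders.

TREE = {
    "start": {
        "question": "Are you a landlord or a tenant?",
        "answers": {"landlord": "landlord_issue", "tenant": "tenant_issue"},
    },
    "landlord_issue": {
        "question": "Is the tenant frequently late with rent?",
        "answers": {"yes": "late_rent_info", "no": "other_info"},
    },
    "late_rent_info": {"info": "Placeholder: rights and next steps for late rent."},
    "tenant_issue": {"info": "Placeholder: common tenant resources."},
    "other_info": {"info": "Placeholder: general landlord resources."},
}

def walk(tree, answers):
    """Follow a sequence of user answers through the tree to an info node."""
    node = tree["start"]
    for answer in answers:
        node = tree[node["answers"][answer]]
    return node["info"]

print(walk(TREE, ["landlord", "yes"]))
```

Because every path through a tree like this is explicit, the legal information at each leaf can be reviewed and verified in advance, which is the property the hybrid LLM approach tries to preserve.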

DALLMA: Document Assistance

The next project Hannes discussed is DALLMA. DALLMA stands for Document Automation, Large Language Model Assisted. This early-stage project aims to automate the drafting of legal documents using large language models. The current version focuses on forms, as people often find them complicated to fill out. AI is utilized to fill in structured information into legal documents. Users provide basic information and context, and the AI assists in structuring and populating the document with relevant legal content. In the future, this could increase efficiency in drafting forms and other legal documents.
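Document automation of this flavor, where structured user facts flow into a legal template, can be pictured with a tiny template fill. The form text and fields below are invented for illustration, not DALLMA's output:

```python
from string import Template

# Toy document-automation step: structured user facts filled into a form
# template. The form and field names are invented examples.
FORM = Template(
    "NOTICE OF RESPONSE\n"
    "Tenant: $tenant_name\n"
    "Address: $address\n"
    "I dispute the claim because: $reason\n"
)

facts = {
    "tenant_name": "Jane Doe",
    "address": "123 Main St, Apt 4",
    "reason": "rent was paid on time (receipt attached)",
}

print(FORM.substitute(facts))
```

In a system like DALLMA, the interesting step is upstream of this fill: using a language model to extract and structure the facts from a user's plain-language description before they are slotted into the document.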

LLMediator: Enhancing Online Dispute Resolution

The LLMediator explores the use of large language models in online dispute resolution (ODR). The LLMediator makes suggestions on how to phrase communications more amicably during disputes. It analyzes the content and sentiment of the message to prevent escalation and promote resolution. For example, if the LLMediator detects aggressive language that could escalate the conflict, the AI might suggest more constructive phrasing. It can also suggest a potential intervention message to a human mediator, supporting them in their work of encouraging a friendly resolution to the dispute. In short, it acts as a virtual assistant that supports mediators and parties by providing suggestions while still allowing the user to make the final decision.

The Challenges of AI

The projects presented by Hannes show the promise of integrating LLMs into the A2J space. However, it is important to be aware of the challenges involved in integrating such instruments into the legal system. One issue is hallucinations. This occurs when the AI generates plausible-sounding but incorrect information, which can be an issue in the legal domain. Hannes explains that this happens because the AI predicts the most probable continuation of a phrase based on its training data, but it does not guarantee accuracy. More research needs to be done to find ways to mitigate these issues. One potential solution is the conceptualization of systems as “augmented intelligence”, as demonstrated, for example, in the LLMediator project. In this approach, the AI system does not provide predictions or recommendations to the user. Rather, it provides information or suggestions that can help the users make better decisions or accomplish tasks more efficiently.

Another potential solution would be to combine AI systems with transparent, logical reasoning systems, as shown, for example, in DALLMA. This approach has the potential to combine the power of large language models with legal expert knowledge, to ensure that users receive accurate legal information. This approach could also help tackle biases that may be present in the training datasets of AI models.

Privacy is another concern, especially in the legal field, which deals with large amounts of sensitive and confidential information. This data can be sent to external servers when using large language models. However, Hannes notes that recent developments in AI technology have led to powerful local AI models that offer more privacy protections. AI providers could also offer contractual guarantees of data protection.

To make sure that AI is implemented in a safe and practical manner in the legal system, it is important to keep these and other challenges in mind. Potential ways of mitigating the challenges include technical innovations and evaluations, regulatory and ethical considerations, guidelines for use of AI in legal contexts, and education for users about the limitations of AI and the importance of verifying the information received through AI models.

Future direction

Hannes concludes his presentation by stating that generative AI should be viewed as a powerful tool that augments human intelligence. The analogy he uses is that of the hammer and carpenter.

“Will law be replaced by AI is a bit like asking: ‘Will the carpenter be replaced by the hammer?’ It just kind of doesn’t make sense as a question, as the hammer is a tool used by the carpenter and not as a replacement for them.”

AI is a powerful tool that can be a useful addition to use cases in the access to justice space. More research needs to be done to better understand the use cases and evaluate the tools. Hannes hopes that the community will engage with the systems and understand what they have to offer so that we can leverage AI to increase access to justice in a safe way.

Read more about Hannes’ work here:

https://scholar.google.com/citations?user=rJvk-twAAAAJ&hl=en

Categories
AI + Access to Justice Current Projects

AI & Legal Help at Codex FutureLaw

At the April 2024 Stanford Codex FutureLaw Conference, our team at the Legal Design Lab presented research findings about users' and subject matter experts' approaches to AI for legal help, and led a half-day interdisciplinary workshop on possible future directions in this space.

Many of the audience members in both sessions were technologists interested in the legal space who were not necessarily familiar with the problems and opportunities facing legal aid groups, courts, and people with civil legal problems. Our goal was to help them understand the “access to justice” space and spot opportunities to which their development and research work could relate.

Some of the ideas that emerged in our hands-on workshop included the following possible AI + A2J innovations:

AI to Scan Scary Legal Documents

Several groups identified that AI could help a person who has received an intimidating legal document: a notice, a rap sheet, an immigration letter, a summons and complaint, a judge's order, a discovery request, etc. AI could let them take a picture of the document, synthesize the information, and present it back with a summary of what it's about, what the important action items are, and how to get started on dealing with it.

It could make this document interactive through FAQs, service referrals, or a chatbot that lets a person understand and respond to it. It could help people take action on these important but off-putting documents, rather than avoid them.

Using AI for Better Gatekeeping of Eviction Notices & Lawsuits

One group proposed that a future AI-powered system could screen possible eviction notices or lawsuit filings, to check whether the landlord or property manager has fulfilled all of their obligations:

  • Landlords must upload notices.
  • AI tools review the notice: Is it valid? Has the landlord done all they can to comply with legal and policy requirements? Is there any chance to promote cooperative dispute resolution at this early stage?
  • If the AI lives at the court clerk level, it might help court staff detect errors, deficiencies, and other problems, helping them allocate limited human review more effectively.

AI to empower people without lawyers to respond to a lawsuit

In addition, AI could help the respondent (tenant) prepare their side, helping them to present evidence, prep court documents, understand court hearing expectations, and draft letters or forms to send.

Future AI tools could help them understand their case, make decisions, and get work product created with little burden.

With a topic like child support modification, AI could help a person negotiate a resolution with the other party, or do a trial run to see how a possible negotiation might go. It could also change their tone, to take a highly emotional negotiation request and transform it to be more likely to get a positive, cooperative reply from the other party.

AI to make Legal Help Info More Accessible

Another group proposed that AI could be integrated into legal aid, law library, and court help centers to:

  • Create and maintain better inter-organization referrals, so there are warm handoffs rather than confusing roundabouts when people seek help
  • Maintain clearer, better organized websites for a jurisdiction, with the best quality resources curated and staged for easy navigation
  • Offer multi-modal presentations, making information available in different visual formats and languages
  • Provide more information in speech-to-text format, conversational chats, and across different dialects. This was especially highlighted for immigration legal services.

AI to upskill students & pro bono clinics

Several groups talked about AI for training and providing expert guidance to staff, law students, and pro bono volunteers to improve their capacity to serve members of the public.

AI tools could be used in simulations to better educate people in a new legal practice area, and also to supplement their knowledge when providing services. Expert practitioners can supply knowledge to the tools, which novice practitioners can then draw on to provide higher-quality services more efficiently in pro bono or law student clinics.

AI could also be used in community centers or other places where community justice workers operate, to get higher quality legal help to people who don’t have access to lawyers or who do not want to use lawyers.

AI to improve legal aid lawyers’ capacity

Several groups proposed AI that could be used behind-the-scenes by expert legal aid or court help lawyers. They could use AI to automate, draft, or speed up the work that they’re already doing. This could include:

  • Improving intake, screening, routing, and summaries of possible incoming cases
  • Drafting first versions of briefs, forms, affidavits, requests, motions, and other legal writing
  • Documenting their entire workflow & finding where AI can fit in.

Cross-Cutting action items for AI+ A2J

Across the many conversations, some common tasks emerged that cross different stakeholders and topics.

Reliable AI Benchmarks:

We as a justice community need to establish solid benchmarks to test AI effectiveness. We can use these benchmarks to focus on relevant metrics.

In addition, we need to regularly report on and track AI performance at different A2J tasks.

This can help us create feedback loops for continuous improvement.
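A benchmark-and-feedback loop along these lines could start as simply as a fixed question set, a rubric-based scorer, and periodic re-runs per model. A sketch, where the questions, rubric keywords, and stub model are all hypothetical placeholders:

```python
# Skeleton of a recurring A2J benchmark run: fixed questions, a keyword-based
# rubric scorer, and a per-question score for one model. All content here is
# an invented illustration, not an actual benchmark or rubric.

QUESTIONS = {
    "q1": "My landlord gave me a 3-day notice. What should I do?",
    "q2": "Can my landlord raise the rent in the middle of my lease?",
}

# Rubric: keywords an adequate answer is expected to touch on (illustrative).
RUBRIC = {
    "q1": ["deadline", "legal aid"],
    "q2": ["lease", "local law"],
}

def score(answer, keywords):
    """Fraction of rubric keywords the answer mentions (case-insensitive)."""
    hits = sum(1 for kw in keywords if kw.lower() in answer.lower())
    return hits / len(keywords)

def run_benchmark(get_answer):
    """Score one model (a callable question -> answer) over the question set."""
    return {qid: score(get_answer(text), RUBRIC[qid])
            for qid, text in QUESTIONS.items()}

# Stub standing in for a real AI system under test.
def stub_model(question):
    return "Check the deadline on the notice and contact legal aid."

print(run_benchmark(stub_model))
```

Real rubrics would come from expert review rather than keyword matching, but even this shape makes the feedback loop concrete: the same questions, re-run on each model version, with scores tracked over time.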

Data Handling and Feedback:

The community needs reliable strategies and rules for how to do AI work that respects obligations for confidentiality and privacy.

Can there be more synthetic datasets that still represent what's happening in legal aid and court practice, so that organizations don't need to share actual client information to train models?

Can there be better Personally Identifiable Information (PII) redaction for data sharing?

Who can offer guidance on what kinds of data practices are ethical and responsible?
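On the PII question, even a lightweight regex pass illustrates what redaction-before-sharing could look like. The patterns below are simplistic examples only; real redaction pipelines need far more robust, validated approaches:

```python
import re

# Simplistic regex-based PII redaction -- an illustration of the idea only.
# Real client data would need validated NER-based tooling, not just regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each matched PII pattern with a bracketed type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Tenant Jane (jane@example.com, 555-867-5309) called about her notice."
print(redact(sample))
```

Even a sketch like this surfaces the hard parts: names, addresses, and case facts don't match tidy patterns, which is why guidance on responsible data practices matters.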

Low-Code AI Systems:

Legal aid and court organizations are never going to have large in-house tech, data, or AI teams. They will need low-code solutions that let them deploy, fine-tune, and maintain AI systems without heavy technical overhead.

Overall, the presentation, Q&A, and workshop all pointed to enthusiasm for responsible innovation in the AI+A2J space. Tech developers, legal experts, and strategists are excited about the opportunity to improve access to justice through AI-driven solutions, and enhance efficiency and effectiveness in legal aid. With more brainstormed ideas for solutions in this space, now it is time to move towards R&D incubation that can help us understand what is feasible and valuable in practice.

Categories
AI + Access to Justice Current Projects

Law360 Article on Legal Design Lab’s AI-Justice Work

In early May 2024, the Stanford Legal Design Lab’s work was profiled in the Law360 publication.

The article summarizes the Legal Design Lab’s work, partnerships & human-centered design approach to tackle legal challenges & develop new technologies.

The article covers our recent user & legal help provider research, our initial phase of groundwork research, and our new phase of R&D to see if we can develop legal AI solutions in partnership with frontline providers.

Finally, the article touches on our research on quality metrics & our upcoming AI platform audit.