
Roadmap for AI and Access to Justice

Our Lab continues to host meetings & participate in others to scope out what kinds of work need to happen to make AI work for access to justice.

We will be making a comprehensive roadmap of tasks and goals.

Here is our initial draft, which divides the roadmap between Cross-Issue Tasks (which apply across specific legal problem and policy areas) and Issue-Specific Tasks (where we are still digging into specifics).

Each of these tasks might become its own branch of AI agent development & evaluation.

Stay tuned for further refinement and testing of this roadmap!


Share Your AI + Justice Idea

Our team at Legal Design Lab is building a national network of people working on AI projects to close the justice gap, through better legal services & information.

We’re looking to find more people working on innovative new ideas & pilots. Please share your idea with us using the form below.

The idea could be for:

  • A new AI tool or agent to help you do a specific legal task
  • A new or fine-tuned AI model for use in the legal domain
  • A benchmark or evaluation protocol to measure the performance of AI
  • A policy or regulation strategy to protect people from AI harms and encourage responsible innovation
  • A collaboration or network initiative to build a stronger ecosystem of people working on AI & justice
  • Another idea you have to improve the development, performance & consumer safety of AI in legal services.

Please be in touch!


Summit schedule for AI + Access to Justice

This October, Stanford Legal Design Lab hosted the first AI + Access to Justice Summit. This invite-only event focused on building a national ecosystem of innovators, regulators, and supporters to guide AI innovation toward closing the justice gap, while also protecting the public.

The Summit’s agenda aimed to teach frontline providers, regulators, and philanthropists about current projects, tools, and protocols for developing impactful justice AI. We did this with hands-on trainings on AI tools, platforms, and privacy and efficiency strategies. We layered on tours of what’s happening with legal aid and court help pilots, and of what regulators and foundations are seeing in AI activity by lawyers and the public.

We then moved from review and learning to creative work. We workshopped how to launch new individual model & agent pilots, while weaving a coordinated network with shared infrastructure, models, benchmarks, and protocols. We closed the day with a discussion of support: how to mobilize financial resources, interdisciplinary relationships, and affordable access to technology.

Our goal was to launch a coordinated, inspired, strategic cohort, working together across the country to set out a common, ambitious vision. We are so thankful that so many speakers, supporters, and participants joined us to launch this network & lay the groundwork for great work yet to come.


Housing Law experts wanted for AI evaluation research

We are recruiting Housing Law experts to participate in a study of AI answers to landlord-tenant questions. Please sign up here if you are a housing law practitioner interested in this study.

Experts who participate in interviews and AI-ranking sessions will receive Amazon gift cards for their participation.


NCSC User Testing Toolkit

The Access to Justice team at the National Center for State Courts has released a new User Testing Toolkit. It can help courts and their partners get user feedback on key documents, services, and tools, like:

  • Court Forms: are they understandable and actionable?
  • Self-Help Materials: can litigants find and engage with them effectively?
  • Court Websites: are they discoverable, accessible, and useful?
  • E-filing Systems: are they easy to use, and to get right the first time?
  • Signage and Wayfinding: can people easily find their way around in-person and digital court spaces, with dignity?
  • Accessibility: are the courts’ physical and digital platforms sufficiently easy to use for all different kinds of people?

The toolkit has background guidance on user testing, strategies for planning testing sessions, and example materials to use in planning, recruitment, facilitation, and analysis.

See more:

G. Vazquez & Z. Zarnow, User Testing Toolkit: Improving Court Usability and Access: A Toolkit for Inclusive and Effective User Testing, Version 1 (Williamsburg, VA: National Center for State Courts, 2024): https://www.ncsc.org/__data/assets/pdf_file/0012/104124/User-Testing-Toolkit.pdf


Design Workbook for Legal Help AI Pilots

For our upcoming AI + Access to Justice Summit and our AI for Legal Help class, our team has made a new design workbook to guide people through scoping a new AI pilot.

We encourage others to use and explore this AI Design Workbook to help think through:

  • Use Cases and Workflows
  • Specific Legal Tasks that AI could do (or should not do)
  • User Personas, and how they might need, worry about, or be affected by AI
  • Data plans for training AI and for deploying it
  • Risks, laws, and ethics: brainstorming about what could go wrong or what regulators might require, plus mitigation/prevention plans to proactively deal with these concerns
  • Quality and Efficiency Benchmarks to aim for with a new intervention (and how to compare the tech with the human service)
  • Support needed to move into the next phases of tech prototyping and pilot deployment

Responsible AI development should move through these three careful stages: design and policy research, tech prototyping and benchmark evaluation, and piloting in a controlled, careful way. We hope this workbook can be useful to groups who want to get started on this journey!


Jurix ’24 AI for Access to Justice Workshop

Building on last year’s very successful academic workshop on AI & Access to Justice at Jurix ’23 in the Netherlands, this year we are pleased to announce a new workshop at Jurix ’24 in Czechia.

Margaret Hagan of the Stanford Legal Design Lab is co-leading an academic workshop on AI for Access to Justice at the legal technology conference Jurix, together with Quinten Steenhuis of the Suffolk LIT Lab and Hannes Westermann of Maastricht University Faculty of Law.

We invite legal technologists, researchers, and practitioners to join us in Brno, Czechia on December 11th for a full-day, hybrid workshop on innovations in AI for helping close the access to justice gap: the large share of legal problems around the world that go unresolved because potential litigants lack the time, money, or ability to participate in court processes to solve them.

See our workshop homepage here for more details on participation.

More on the Workshop

The workshop will be a hybrid event. Participants will be able to join in person or remotely via Zoom, although we hope for broad in-person participation. Depending on interest, selection preference may be given to in-person participants.

The workshop will feature short paper presentations (likely 10 minutes each), demos, and, if possible, interactive exercises that invite attendees to help design approaches to closing the access to justice gap with the help of AI.

Like last year, it will be a full-day workshop.

We invite contributors to submit:

  • short papers (5-10 pages), or
  • proposals for demos or interactive workshop exercises

We welcome works in progress, although, depending on interest, we will give preference to complete ideas that can be evaluated, shared, and discussed.

The focus of submissions should be on AI tools, datasets, and approaches (whether large language models, traditional machine learning, or rules-based systems) that solve the real-world problems of unrepresented litigants or legal aid programs. Papers discussing the ethical implications, limits, and policy questions of AI in law are also welcome.

Other topics may include:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Papers should follow the formatting instructions of CEUR-WS.

Submissions will be subject to peer review, with an aim toward possible publication as workshop proceedings. Submissions will be evaluated on overall quality, technical depth, relevance, and diversity of topics, to ensure an engaging and high-quality workshop.

Important dates

We invite all submissions to be made no later than November 11th, 2024.

We anticipate making decisions by November 22, 2024.

The workshop will be held on December 11, 2024.

Submit your proposals via EasyChair.

Authors are encouraged to submit an abstract even before making a final submission. You can revise your submission until the deadline of November 11th.

More about Jurix

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year, the JURIX conference on Legal Knowledge and Information Systems will be hosted in Brno, Czechia. It will take place on December 11-13, 2024.


Good/Bad AI Legal Help at Trust and Safety Conference

This week, Margaret Hagan presented at the Trust and Safety Research Conference, which brings together academics, tech professionals, regulators, nonprofits, and philanthropies to work on making the Internet a safer, more user-friendly place.

Margaret presented interim results of the Lab’s expert and user studies of AI’s performance at answering everyday legal questions, such as those about evictions and other landlord-tenant problems.

Some of the topics for discussion among the audience and the panel on the Future of Search:

  • How can regulators, frontline domain experts (like legal aid lawyers and court professionals), and tech companies better work together to spot harmful content, set tailored policies, and ensure better outcomes for users?
  • Should tech companies’ and governments’ policies about the best way to present information, and how much of it, differ across domains? For legal help queries, for example, is it better to encourage more straightforward, simple, directive & authoritative info, or more complex, detailed information that encourages curiosity and exploration?
  • How do we more proactively spot the harms and risks that might come from new & novel tech systems, which might be quite different from previous search engines or other tech systems?
  • How can we hold tech companies accountable for making more accurate tech systems, without chilling them out of a domain (like legal or health) where they don’t want to provide any substantial information for fear of liability?

Interviewing Legal Experts on the Quality of AI Answers

This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions that individuals may get when they go online to find help.

Learned Hands Battle Mode

Our team has developed a new ‘Battle Mode’ for our rating and classification platform, Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user query and determine which one is superior. We also ask the experts to think aloud as they play, articulating their reasoning. This gives us insight into why a particular response is deemed good or bad, helpful or harmful.
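
As an illustration of how pairwise battle outcomes can be turned into a model ranking, here is a minimal Python sketch. It is not the Learned Hands implementation: the battle records and model names are hypothetical, and it assumes a simple Elo-style update, one common way to aggregate pairwise preferences into per-model scores.

```python
from collections import defaultdict

# Hypothetical battle records: (winning_model, losing_model) pairs,
# as might be exported from expert pairwise-comparison sessions.
battles = [
    ("model_a", "model_b"),
    ("model_b", "model_c"),
    ("model_a", "model_c"),
    ("model_a", "model_b"),
]

def elo_ratings(battles, k=32.0, initial=1000.0):
    """Aggregate pairwise wins into Elo-style ratings.

    Each battle nudges the winner's rating up and the loser's down,
    weighted by how surprising the outcome was given current ratings.
    """
    ratings = defaultdict(lambda: initial)
    for winner, loser in battles:
        # Expected probability that the winner would win, given current ratings.
        expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
        ratings[winner] += k * (1.0 - expected)
        ratings[loser] -= k * (1.0 - expected)
    return dict(ratings)

if __name__ == "__main__":
    for model, rating in sorted(elo_ratings(battles).items(), key=lambda kv: -kv[1]):
        print(f"{model}: {rating:.1f}")
```

A real benchmark would also need to handle ties, question categories, agreement between experts, and the think-aloud commentary that explains why an answer won.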

Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries, one that applies to major AI platforms as well as to local chatbots and tools developed by individual groups and startups. Along the way, we hope to refine our methods for conducting these audits and benchmarks, so that we can accurately assess AI’s capabilities in answering people’s legal questions.

Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment. 

We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.

Stay tuned for the results of our expert-led rating games and feedback sessions!

If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.

https://airtable.com/embed/appMxYCJsZZuScuTN/pago0ZNPguYKo46X8/form


Autumn 24 AI for Legal Help

Our team is excited to announce the new, 2024-25 version of our ongoing class, AI for Legal Help. This school year, we’re moving from background user and expert research towards AI R&D and pilot development.

Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate those risks?

In this course, students will design and develop new demonstration AI projects and pilot plans, combining human-centered design, tech & data work, and law & policy knowledge. 

Students will work on interdisciplinary teams, each partnered with frontline legal aid and court groups interested in using AI to improve their public services. Student teams will help their partners scope specific AI projects, spot and mitigate risks, train a model, test its performance, and think through a plan to safely pilot the AI. 

By the end of the class, students and their partners will co-design new tech pilots to help people dealing with legal problems like evictions, reentry from the criminal justice system, debt collection, and more.

Students will get experience in human-centered AI development, and in thinking critically about whether and how technology projects can be used to help the public with a high-stakes legal problem. Along with their AI pilot, teams will establish important guidelines to ensure that new AI projects are centered on the needs of people, and developed with a careful eye toward ethical and legal principles.

Join our policy lab team to do R&D to define the future of AI for legal help. Apply for the class at the SLS Policy Lab link here.