
AI + Access to Justice Summit 2024

On October 17 and 18, 2024, Stanford Legal Design Lab hosted the first-ever AI and Access to Justice Summit.

The Summit’s primary goal was to build strong relationships and a national, coordinated roadmap of how AI can responsibly be deployed and held accountable to close the justice gap.

AI + A2J Summit at Stanford Law School

Who was at the Summit?

Two law firm sponsors, K&L Gates and DLA Piper, supported the Summit through travel scholarships, program costs, and strategic guidance.

The main group of invitees was frontline legal help providers at legal aid groups, law help website teams, and the courts. They are key players in deciding what kinds of AI could and should be impactful for closing the justice gap, and they’ll also be key partners in developing, piloting, and evaluating new AI solutions.

Key supporters and regional leaders from bar foundations, philanthropies, and pro bono groups were also invited. Their knowledge about funding, scaling, past initiatives, and spreading projects from one organization or region to others was key to the Summit.

Technology developers also came, both from big technology companies like Google and Microsoft and legal technology companies like Josef, Thomson Reuters, Briefpoint, and Paladin. Some of these groups already have AI tools for legal services, but not all of them have focused on access to justice use cases.

In addition, we invited researchers who are developing responsible, privacy-forward, and efficient strategies for building specialized AI solutions for people in the justice sphere, and who can draw lessons from how AI is being deployed in parallel fields like medicine and mental health.

Finally, we had participants who work in regulation and policy-making at state bars to talk about policy, ethics, and balancing innovation with consumer protection. The ‘rules of the road’ about what kinds of AI can be built and deployed, and what standards they need to follow, are essential for clarity and predictability among developers.

What Happened at the Summit?

The Summit was a 2-day event, split intentionally into 5 sections:

  • Hands-On AI Training: Examples and research to upskill legal professionals. There were demos, explainers, and strategies about what AI solutions are already in use or possible for legal services. Big tech, legal tech, and computer science researchers gave participants a hands-on, practical, detailed tour of AI tools, examples, and protocols that can be useful in developing new solutions to close the justice gap.
  • Big Vision: Margaret Hagan and Richard Susskind opened up the 2nd day with a challenge: where does the access to justice community want to be in 2030 when it comes to AI and the justice gap? How can individual organizations collaborate, build common infrastructure, and learn from each other to reach our big-picture goals?
  • AI+A2J as of 2024: In the morning of the second day, two panels presented on what is already happening in AI and access to justice — including an inventory of current pilots, demos of some early legal aid chatbots, regulators’ guidelines, and innovation sandboxes. This helped the group understand the early-stage developments and policies.
  • Design & Development of New Initiatives. In the afternoon of the second day, we led breakout design workshops on specific use cases: housing law, immigration law, legal aid intake, and document preparation. The diverse stakeholders worked together using our AI Legal Design workbook to scope out a proposal for a new solution — whether that might mean building new technology or adapting off-the-shelf tech to the needs.
  • Support & Collaboration. In the final session, we heard from a panel that talked through support: financial support, pro bono partnership support, technology company licensing and architecture support, and other ways to build new interdisciplinary relationships that could unlock the talent, strategy, momentum, and finances necessary to make AI innovation happen. We also discussed support around evaluation, so that there could be more data and a greater sense of safety in deploying these new tools.

Takeaways from the Summit

The Summit built strong relationships & common understanding among technologists, providers, researchers, and supporters. Our hope is to run the Summit annually, to track year-to-year progress in tackling the justice gap with AI and to watch these relationships, collaborations, and their impact grow over time.

In addition, some key points emerged from the training, panels, workshops, and down-time discussions.

Common Infrastructure for AI Development

Though many AI pilots are going to have to be local to a specific organization in a specific region, the national (or international) justice community can work on common resources that serve as infrastructure to support AI for justice.

  • Common AI Trainings: Regional leaders, who are newly being hired by state bars and bar foundations to provide training and explore how AI can fit with legal services, should be working together to develop common trainings, common resources, and common best practices.
  • Project Repository: National organizations and networks should be thinking about a common repository of projects. This inventory could track what tech provider is being used, what benchmark is being used for evaluation, what AI model is being deployed, what data it was fine-tuned on, and if and how others could replicate it (a hypothetical entry schema is sketched after this list).
  • Rules of the Road Trainings. National organizations and local regulators could give more guidance to leadership like legal aid executive directors about what is allowed or not allowed, what is risky or safe, or other clarification that can help more leadership be brave and knowledgeable about how to deploy AI responsibly. When is an AI project sufficiently tested to be released to the public? How should the team be maintaining and tracking an AI project, to ensure it’s mitigating risk sufficiently?
  • Public Education. Technology companies, regulators, and frontline providers need to be talking more about how to make sure that the AI that is already out there (like ChatGPT, Gemini, and Claude) is reliable, has enough guardrails, and is consumer-safe. More research needs to be done on how to encourage strategic caution among the public, so they can use the AI safely and avoid user mistakes with it (like overreliance or misunderstanding).
  • Regulators<->Frontline Providers. More frontline legal help providers need to be in conversation with regulators (like bar associations, attorneys general, or other state/federal agencies) about if and how AI can be useful in closing the justice gap. Their views on risks, consumer harms, opportunities, and needs from regulators can ensure that rules are set to maximize positive impact and minimize consumer harm & technology chilling.
  • Bar Foundation Collaboration. Statewide funders (especially bar foundations) can be talking to each other about their funding, scaling, and AI strategies. Well-resourced bar foundations can share how they are distributing money, what kinds of projects they’re incentivizing, how they are holding the projects accountable, and what local resources or protocols they could share with others.
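
To make the Project Repository idea above more concrete, here is a loose sketch of what one entry’s record might look like, written as a Python data class. Every field name is invented for this illustration; a real registry would standardize its schema with the national networks that host it.

```python
# Hypothetical schema for one entry in a shared AI-for-justice project
# repository. Field names are invented for illustration; a real registry
# would standardize these with the organizations that maintain it.
from dataclasses import dataclass

@dataclass
class ProjectEntry:
    name: str                  # project / pilot name
    organization: str          # legal aid group, court, etc.
    tech_provider: str         # vendor or platform being used
    base_model: str            # which AI model is deployed
    fine_tuning_data: str      # what data the model was tuned on
    evaluation_benchmark: str  # how performance is measured
    replicable: bool           # can others reproduce this pilot?
    notes: str = ""            # replication steps, caveats, contacts

# Example (entirely fictional) entry:
entry = ProjectEntry(
    name="Eviction Answer Helper",
    organization="Example Legal Aid",
    tech_provider="ExampleVendor",
    base_model="an off-the-shelf LLM",
    fine_tuning_data="synthetic intake narratives",
    evaluation_benchmark="expert pairwise-rating study",
    replicable=True,
    notes="Shareable prompts and evaluation protocol available on request.",
)
print(entry)
```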

AI for Justice Should be Going Upstream & Going Big

Richard Susskind charged the group with thinking big about AI for justice. His charges & insights inspired many of the participants throughout the Summit, particularly on two points.

Going Big. Susskind called on legal leaders and technologists not to do piecemeal AI innovation (which might well be the default pathway). Rather, he called on them to work in coordination across the country (if not the globe). The focus should be on reimagining how to use AI as a way to make a fundamental, beneficial shift in justice services. This means not just doing small optimizations or tweaks, but shifting the system to work better for users and providers.

Susskind charged us with thinking beyond augmentation, toward new models of serving the public with their justice needs.

Going Upstream. He also charged us with going upstream, figuring out more early ways to spot and get help to people. This means not just adding AI into the current legal aid or court workflow — but developing new service offerings, data links, or community partnerships. Can we prevent more legal problems by using AI before a small problem spirals into a court case or large conflict?

After Susskind’s remarks, I focused on coordination among legal actors across the country for AI development. Compared to the last 20 years of legal technology development, are there ways to be more coordinated, and also more focused on impact and accountability?

There might be strategic leaders in different regions of the US and in different issue areas (housing, immigration, debt, family, etc.) who are spreading

  • best practices,
  • evaluation protocols and benchmarks,
  • licensing arrangements with technology companies,
  • bridges with the technology companies, and
  • conversations with the regulators.

How can the Access to Justice community be more organized so that its voice can be heard as

  • the rules of the road are being defined?
  • technology companies are building and releasing models that the public is going to be using?
  • technology vendors decide if and how they are going to enter this market, and what their pricing and licensing are going to look like?

Ideally, legal aid groups, courts, and bars will be collaborating together to build AI models, agents, and evaluations that can get a significant number of people the legal help they need to resolve their problems — and to ensure that the general, popular AI tools are doing a good job at helping people with their legal problems.

Privacy Engineering & Confidentiality Concerns

One of the main barriers to AI R&D for justice is confidentiality. Legal aid and other help providers have a duty to keep their clients’ data confidential, which restricts their ability to use past data to train models or to use current data to execute tasks through AI. In practice, many legal leaders are nervous about any new technology that requires client data: will it lead to data leaks, client harms, regulatory actions, bad press, or other concerning outcomes?

Our technology developers and researchers had cutting-edge proposals for privacy-forward AI development that could address some of these concerns around confidentiality. Though these privacy engineering strategies are foreign to many lawyers, the technologists broke them down into step-by-step explanations with examples, to help more legal professionals think about data protection in a systematic, engineering way.

Synthetic Data. One of the privacy-forward strategies discussed was synthetic data. With this solution, a developer doesn’t use real, confidential data to train a system. Rather, they create a parallel but fictional set of data — like a doppelganger to the original client data. It’s structurally similar to confidential client data, but it contains no real people’s information. Synthetic data is a common strategy in healthcare technology, where there is a similar emphasis on patient confidentiality.

Neel Guha explained to the participants how synthetic data works, and how they might build a synthetic dataset that is free of identifiable data and does not violate ethical duties of confidentiality. He emphasized that the more legal aid and court groups can develop datasets that are shareable with researchers and the public, the more researchers and technologists will be attracted to working on justice-tech challenges. More synthetic datasets will be both ethically safe and beneficial to collaboration, scaling, and innovation.
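
To make the idea concrete, here is a minimal sketch of how a team might generate a structurally similar but fully fictional intake dataset. It assumes Python with the third-party faker package, and every field name is a hypothetical illustration; a production synthetic-data pipeline would also need to model the statistical patterns of the real data, which this toy example does not.

```python
# Minimal sketch: generating a fictional "doppelganger" intake dataset.
# Assumes Python 3 with the third-party `faker` package (pip install faker).
# Every field name here is a hypothetical illustration, not a real schema.
import csv
import random

from faker import Faker

fake = Faker()
ISSUE_TYPES = ["eviction", "debt collection", "custody", "benefits denial"]

def synthetic_intake_record() -> dict:
    """One fictional intake record: structurally similar to real client
    data, but containing no real person's information."""
    return {
        "client_name": fake.name(),                   # fictional person
        "zip_code": fake.zipcode(),                   # fictional location
        "issue_type": random.choice(ISSUE_TYPES),
        "intake_date": fake.date_this_year().isoformat(),
        "narrative": fake.paragraph(nb_sentences=3),  # placeholder story
    }

# Write 1,000 fictional records that could be shared with researchers.
with open("synthetic_intake.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(synthetic_intake_record()))
    writer.writeheader()
    for _ in range(1000):
        writer.writerow(synthetic_intake_record())
```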

Federated Model Training. Another privacy/confidentiality strategy was federated model training. A Google DeepMind team presented on this strategy, taking examples from the health system.

The example involved multiple hospitals that all wanted to work on the same project: training an AI model to better spot tuberculosis or other issues on lung X-rays. Each hospital wanted to train the AI model on its existing X-ray data, but did not want to let this confidential data leave its servers for a centralized server. Sharing the data would break their confidentiality requirements.

So instead, the hospitals went with a federated model training protocol. Here, an original, first version of the AI model was taken from the centralized server and put on each hospital’s local servers. Each local copy of the AI model trained on that hospital’s X-ray data. The resulting model updates were then sent back to the centralized server, which accumulated all of the learnings to make a smart model in the center. The local hospital data was never shared.

In this way, legal aid groups or courts could explore making a centralized model while still keeping their confidential data sources on their own private, secure servers. Individual case data and confidential data stay on the local servers, while the smart collective model lives in a centralized place and gradually gets smarter. This technique can also work for training the model over time, so that it keeps improving as the information and data continue to grow.
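
For readers who want to see the mechanics, here is a minimal federated-averaging sketch under stated assumptions: the "model" is just a NumPy weight vector, and local_update is a hypothetical stand-in for real local training. The key property is that only model weights, never raw records, travel to the center.

```python
# Minimal sketch of federated averaging (FedAvg-style aggregation).
# Plain Python + NumPy: the "model" is just a weight vector, and
# local_update() is a hypothetical stand-in for real local training.
import numpy as np

def local_update(global_weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    """One site's local training step: nudge the shared model toward this
    site's (never-shared) data and return only the updated weights."""
    gradient = global_weights - local_data.mean(axis=0)  # toy loss gradient
    return global_weights - 0.1 * gradient

# Each organization's confidential data stays on its own server.
site_datasets = [np.random.randn(100, 4) + i for i in range(3)]

global_weights = np.zeros(4)
for round_num in range(10):
    # 1. The center sends the current global model to every site.
    # 2. Each site trains locally and returns only its new weights.
    local_weights = [local_update(global_weights, data) for data in site_datasets]
    # 3. The center averages the returned weights; raw data never moves.
    global_weights = np.mean(local_weights, axis=0)

print("Shared model weights after 10 rounds:", global_weights)
```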

Towards the Next Year of AI for Access to Justice

The Legal Design Lab team thanks all of our participants and sponsors for a tremendous event. We learned so much and built new relationships that we look forward to deepening with more collaborations & projects.

We were excited to hear frontline providers walk away with new ideas, concrete plans for how to borrow from others’ AI pilots, and an understanding of what might be feasible. We were also excited to see new pro bono and funding relationships develop that can unlock more resources in this space.

Stay tuned as we continue our work on AI R&D, evaluation, and community-building in the access to justice community. We look forward to working towards closing the justice gap, through technology and otherwise!


Roadmap for AI and Access to Justice

Our Lab is continuing to host meetings & participate in others to scope out what work needs to happen to make AI work for access to justice.

We will be making a comprehensive roadmap of tasks and goals.

Here is our initial draft — that divides the roadmap between Cross-Issue Tasks (that apply across specific legal problem/policy areas) and Issue-Specific Tasks (where we are still digging into specifics).

Each of these tasks might become its own branch of AI agents & evaluation.

Stay tuned for further refinement and testing of this roadmap!


Share Your AI + Justice Idea

Our team at Legal Design Lab is building a national network of people working on AI projects to close the justice gap, through better legal services & information.

We’re looking to find more people working on innovative new ideas & pilots. Please share yours with us using the form below.

The idea could be for:

  • A new AI tool or agent, to help you do a specific legal task
  • A new or finetuned AI model for use in the legal domain
  • A benchmark or evaluation protocol to measure the performance of AI
  • A policy or regulation strategy to protect people from AI harms and encourage responsible innovation
  • A collaboration or network initiative to build a stronger ecosystem of people working on AI & justice
  • Another idea you have to improve the development, performance & consumer safety of AI in legal services.

Please be in touch!


Summit schedule for AI + Access to Justice

This October, Stanford Legal Design Lab hosted the first AI + Access to Justice Summit. This invite-only event focused on building a national ecosystem of innovators, regulators, and supporters to guide AI innovation toward closing the justice gap, while also protecting the public.

The Summit’s flow aimed to teach frontline providers, regulators, and philanthropists about current projects, tools, and protocols to develop impactful justice AI. We did this with hands-on trainings on AI tools, platforms, and privacy/efficiency strategies. We layered on tours of what’s happening with legal aid and court help pilots, and what regulators and foundations are seeing with AI activity by lawyers and the public.

We then moved from review and learning to creative work. We workshopped how to launch new individual model & agent pilots, while weaving a coordinated network with shared infrastructure, models, benchmarks, and protocols. We closed the day with a discussion of support: how to mobilize financial resources, interdisciplinary relationships, and affordable technology access.

Our goal was to launch a coordinated, inspired, strategic cohort, working together across the country to set out a common, ambitious vision. We are so thankful that so many speakers, supporters, and participants joined us to launch this network & lay the groundwork for great work yet to come.


Housing Law experts wanted for AI evaluation research

We are recruiting Housing Law experts to participate in a study of AI answers to landlord-tenant questions. Please sign up here if you are a housing law practitioner interested in this study.

Experts who participate in interviews and AI-ranking sessions will receive Amazon gift cards for their participation.


Design Workbook for Legal Help AI Pilots

For our upcoming AI+Access to Justice Summit and our AI for Legal Help class, our team has made a new design workbook to guide people through scoping a new AI pilot.

We encourage others to use and explore this AI Design Workbook to help think through:

  • Use Cases and Workflows
  • Specific Legal Tasks that AI could do (or should not do)
  • User Personas, and how they might need or worry about AI — or how they might be affected by it
  • Data plans for training AI and for deploying it
  • Risk, law, and ethics brainstorming about what could go wrong or what regulators might require, and mitigation/prevention plans to proactively deal with these concerns
  • Quality and Efficiency Benchmarks to aim for with a new intervention (and how to compare the tech with the human service)
  • Support needed to go into the next phases of tech prototyping and pilot deployment

Responsible AI development should go through these 3 careful stages — design and policy research, tech prototyping and benchmark evaluation, and piloting in a controlled, careful way. We hope this workbook can be useful to groups who want to get started on this journey!


Jurix ’24 AI for Access to Justice Workshop

Building on last year’s very successful academic workshop on AI & Access to Justice at Jurix ’23 in the Netherlands, this year we are pleased to announce a new workshop at Jurix ’24 in Czechia.

Margaret Hagan of the Stanford Legal Design Lab is co-leading an academic workshop at the legal technology conference Jurix, on AI for Access to Justice. Quinten Steenhuis from Suffolk LIT Lab and Hannes Westermann of Maastricht University Faculty of Law will co-lead the workshop.

We invite legal technologists, researchers, and practitioners to join us in Brno, Czechia on December 11th for a full-day, hybrid workshop on innovations in AI for helping close the access to justice gap: the majority of legal problems around the world go unresolved because potential litigants lack the time, money, or ability to participate in court processes to solve them.

See our workshop homepage here for more details on participation.

More on the Workshop

The workshop will be a hybrid event. Workshop participants will be able to participate in-person or remotely via Zoom, although we hope for broad in-person participation. Depending on interest, a selection preference may be given for in-person participation.

The workshop will feature short paper presentations (likely 10 minutes), demos, and, if possible, interactive exercises that invite attendees to help design approaches to closing the access to justice gap with the help of AI.

Like last year, it will be a full-day workshop.

We invite contributors to submit:

  • short papers (5-10 pages), or
  • proposals for demos or interactive workshop exercises

We welcome works in progress, although depending on interest, we will give a preference to complete ideas that can be evaluated, shared and discussed.

The focus of submissions should be on AI tools, datasets, and approaches, whether large language models, traditional machine learning, or rules-based systems, that solve the real-world problems of unrepresented litigants or legal aid programs. Papers discussing the ethical implications, limits, and policy implications of AI in law are also welcome.

Other topics may include:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Papers should follow the formatting instructions of CEUR-WS.

Submissions will be subject to peer review, with an aim toward possible publication as workshop proceedings. Submissions will be evaluated on overall quality, technical depth, relevance, and diversity of topics to ensure an engaging and high-quality workshop.

Important dates

We invite all submissions to be made no later than November 11th, 2024.

We anticipate making decisions by November 22, 2024.

The workshop will be held on December 11, 2024.

Submit your proposals via EasyChair.

Authors are encouraged to submit an abstract even before making a final submission. You can revise your submission until the deadline of November 11th.

More about Jurix

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year, the JURIX conference on Legal Knowledge and Information Systems will be hosted in Brno, Czechia. It will take place on December 11-13, 2024.


Good/Bad AI Legal Help at Trust and Safety Conference

This week, Margaret Hagan presented at the Trust and Safety Research Conference, which brings together academics, tech professionals, regulators, nonprofits, and philanthropies to work on making the Internet a safer, more user-friendly place.

Margaret presented interim results of the Lab’s expert and user studies of AI’s performance at answering everyday legal questions, such as those around evictions and other landlord-tenant problems.

Some of the topics discussed by the audience and the panel on the Future of Search included:

  • How can regulators, frontline domain experts (like legal aid lawyers and court professionals), and tech companies better work together to spot harmful content, set tailored policies, and ensure better outcomes for users?
  • Should tech companies’ and governments’ policies about the best way (and amount) to present information to a user differ across domains? For legal help queries, for example, is it better to encourage more straightforward, simple, directive & authoritative info — or more complex, detailed information that encourages more curiosity and exploration?
  • How do we more proactively spot the harms and risks that might come from new & novel tech systems, which might be quite different from previous search engines or other tech systems?
  • How can we hold tech companies accountable for making more accurate tech systems, without chilling them out of certain domains (like legal or health) where they won’t provide any substantial information for fear of liability?

Interviewing Legal Experts on the Quality of AI Answers

This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions that individuals may get when they go online to find help.

Learned Hands Battle Mode

Our team has developed a new ‘Battle Mode’ for our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. Additionally, we have the experts think aloud as they play, asking them to articulate their reasoning. This allows us to gain insights into why a particular response is deemed good or bad, helpful or harmful.
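
The post leaves open how these pairwise judgments get turned into overall scores; one common approach for pairwise-comparison data is an Elo-style rating, sketched below in Python with hypothetical model names.

```python
# Minimal sketch: turning pairwise "which answer is better?" judgments into
# per-model scores with an Elo-style update. The blog post does not say how
# Learned Hands aggregates judgments; this is one common, illustrative choice.
K = 32  # step size: how much a single judgment moves the ratings

def expected_win(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def record_judgment(ratings: dict, winner: str, loser: str) -> None:
    """Apply one expert judgment: the `winner` answer was preferred."""
    p = expected_win(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - p)
    ratings[loser] -= K * (1 - p)

# Hypothetical model names; each tuple is (preferred answer, other answer).
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
judgments = [
    ("model_a", "model_b"),
    ("model_a", "model_c"),
    ("model_c", "model_b"),
    ("model_a", "model_b"),
]
for winner, loser in judgments:
    record_judgment(ratings, winner, loser)

# Highest-rated model first.
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```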

Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries, one that applies to major AI platforms as well as to local chatbots and tools developed by individual groups and startups. By refining these methods, we hope to accurately assess AI’s capabilities in answering people’s legal questions over time.

Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t use) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment.

We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.

Stay tuned for the results of our expert-led rating games and feedback sessions!

If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.

https://airtable.com/embed/appMxYCJsZZuScuTN/pago0ZNPguYKo46X8/form


Autumn 24 AI for Legal Help

Our team is excited to announce the new, 2024-25 version of our ongoing class, AI for Legal Help. This school year, we’re moving from background user and expert research towards AI R&D and pilot development.

Can AI increase access to justice by helping people resolve their legal problems in more accessible, equitable, and effective ways? What risks does AI pose for people seeking legal guidance, and what technical and policy guardrails should mitigate them?

In this course, students will design and develop new demonstration AI projects and pilot plans, combining human-centered design, tech & data work, and law & policy knowledge. 

Students will work on interdisciplinary teams, each partnered with frontline legal aid and court groups interested in using AI to improve their public services. Student teams will help their partners scope specific AI projects, spot and mitigate risks, train a model, test its performance, and think through a plan to safely pilot the AI. 

By the end of the class, students and their partners will co-design new tech pilots to help people dealing with legal problems like evictions, reentry from the criminal justice system, debt collection, and more.

Students will get experience in human-centered AI development, and critical thinking about if and how technology projects can be used in helping the public with a high-stakes legal problem. Along with their AI pilot, teams will establish important guidelines to ensure that new AI projects are centered on the needs of people, and developed with a careful eye towards ethical and legal principles.

Join our policy lab team to do R&D to define the future of AI for legal help. Apply for the class at the SLS Policy Lab link here.