
AI & Legal Help Crossover Event with computer scientists and lawyers

In July, an interdisciplinary group of researchers at Stanford hosted the “AI and Legal Help Crossover” event, bringing together stakeholders from the civil justice system and computer science to meet, talk, and identify promising next steps for the responsible development of AI to improve the justice system.

This builds on our Spring workshop, co-hosted with the Self-Represented Litigation Network, which led justice professionals through a brainstorm of how AI could help them and their clients improve access to justice.

Here are 3 topic areas that came out of this event, which we’re excited to keep working on in the future!

Topic 1: Next Steps for advancing AI & Justice work

These are the activities that participants highlighted as valuable for advancing AI & justice work:

Events that dive into AI applications, research, and evaluation in specific areas of the justice system. For example, could we hold meetings that focus on specific topics, like:

  • High-volume, quick proceedings like Debt, Traffic, Parking, and Eviction cases. These case types might have similar dynamics, processes, and litigant needs.
    • What are the ideas for applications that could improve the quality of justice and outcomes in these areas?
    • What kinds of research, datasets, and protocols might be developed in these areas in particular that would matter to policymakers, service providers, or communities?
  • Hot areas for innovation like Eviction Diversion and Criminal Justice Diversion, where many pilots to improve outcomes are already underway. If there is already energy to pilot new interventions in a particular area, can we amplify it with AI?

Local Justice/AI R&D Community-building, to establish regional hubs in areas with anchor institutions that have both AI research and development capacity and justice system expertise. Can we build a network of local groups working on improving AI development and research, where local experts in computer science can learn about opportunities to work with justice system actors, so that they are informed and connected enough to do relevant work?

Index of Justice/AI Research, Datasets, Pilots, and Partners, so that people new to this area (from both technical and legal backgrounds) can see what is happening, build relationships, and collaborate with each other.

Domain Expert Community meetings that could attract more legal aid lawyers, self-help court staff, clerks, navigators, judicial officers, and others who have on-the-ground experience helping litigants through the court system. Could we start gathering and standardizing their expertise into more formal benchmarks, rating scales, and evaluation protocols?

Unauthorized Practice of Law & Regulatory Discussions to talk through where legal professional rules might apply to AI tools, and how they might be interpreted or adapted to best protect people from harm while also giving people increased access and empowerment.

National Group Leadership and Support, in which professional groups and consortia like the Legal Services Corporation, State Justice Institute Joint Technology Committee, CiTOC, Bureau of Justice Statistics, Department of Justice, or National Center for State Courts could help:

  • Define an agenda for projects, research, and evaluation needs
  • Encourage groups to standardize data and make it available
  • Call for pilots and partnerships, and help with the matchmaking of researchers, developers, and courts/legal aid groups
  • Incentivize pilots and evaluation with funding dedicated to human-centered AI for justice

Topic 2: Tasks where AI might help in the justice system

We grouped the ideas for applications of AI in the justice system into some themes. These resonate with our earlier workshop on ideas for AI in the justice sector, which we held with the Self-Represented Litigation Network:

  • Litigant Empowerment applications
  • Service Improvement applications
  • Accountability applications
  • Research & System Design applications

Litigant Empowerment themes

  • AI for litigant decision-making, to help a person understand possible outcomes that may result from a certain claim, defense, or choice they make in the justice system. It could help them be more strategic with their choices, wording, etc. 
  • AI to craft better claims and arguments so that litigants or their advocates could understand the strongest claims, arguments, citations, and evidence to use. 
  • AI for narratives and document completion, to help a litigant move quickly from a summary of their facts and experiences to a properly formatted and written court filing.
  • AI for legalese-to-plain-language translation, which could help a person understand a notice, contract, court order, or other legal document they receive.

Service Improvement themes

  • AI for legal aid groups or court help centers to intake and triage users, directing them to the right issue area, level of service, and resources they can use (see the sketch after this list).
  • AI for chat and coaching, to package experts’ knowledge about following a court process, filling in a form, preparing for a hearing, or other legal tasks.
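
To make the intake and triage idea above more concrete, here is a minimal sketch of how a keyword-based triage step might route a person’s plain-language description to an issue area and a suggested level of service. The issue categories, keywords, and service tiers are hypothetical placeholders rather than a real legal aid taxonomy; an actual system would need expert-vetted categories, better language understanding, and careful evaluation.

    # Minimal triage sketch: route a plain-language problem description to a
    # hypothetical issue area and service tier. Keyword lists and tiers are
    # illustrative placeholders, not a vetted legal-aid taxonomy.

    ISSUE_KEYWORDS = {
        "eviction": ["evict", "landlord", "lease", "notice to vacate"],
        "debt": ["debt", "collector", "garnish", "credit card", "owe"],
        "family": ["divorce", "custody", "child support"],
    }

    # Hypothetical service tiers, from lightest to most intensive.
    SERVICE_TIERS = ["self-help guide", "brief advice clinic", "full representation referral"]


    def triage(description: str) -> dict:
        """Score each issue area by keyword hits and suggest a service tier."""
        text = description.lower()
        scores = {
            issue: sum(1 for kw in keywords if kw in text)
            for issue, keywords in ISSUE_KEYWORDS.items()
        }
        best_issue, best_score = max(scores.items(), key=lambda kv: kv[1])
        if best_score == 0:
            return {"issue": "unknown", "service": "route to a human screener"}
        # Very rough heuristic: urgency pushes toward more intensive service.
        urgent = any(w in text for w in ["court date", "tomorrow", "locked out"])
        tier = SERVICE_TIERS[2] if urgent else SERVICE_TIERS[0]
        return {"issue": best_issue, "service": tier}


    print(triage("My landlord gave me a notice to vacate and my court date is tomorrow"))
    # -> {'issue': 'eviction', 'service': 'full representation referral'}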

Accountability themes

  • AI to spot policy and advocacy targets, where legal aid advocates, attorney general offices, or journalists could see which courts or judges might have issues with the quality of justice in their proceedings, and where more training or advocacy for change might be needed.
  • AI to spot fraud, bad practices, and concerning trends. For example, could it scan petitions filed in debt cases and flag for clerks when the dates, amounts, or claims indicate that a claim is not valid? Or could it look through settlement agreements in housing or debt cases to find concerning terms or dynamics?
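
As one illustration of the petition-screening idea above, here is a minimal sketch of the kinds of automated checks a clerk-facing tool might run on debt filings. The field names, the six-year limitations period, and the thresholds are assumptions made up for the example, not actual court rules; real checks would need to encode each jurisdiction’s requirements and be validated with domain experts.

    # Sketch of rule-based screening for debt-collection petitions.
    # Field names, the limitations period, and thresholds are hypothetical.
    from datetime import date

    LIMITATIONS_YEARS = 6  # assumed limitations period, for illustration only


    def screen_petition(petition, today=None):
        """Return human-readable flags for a clerk to review."""
        today = today or date.today()
        flags = []

        # Flag claims where the assumed limitations period appears to have run.
        last_payment = petition.get("last_payment_date")
        if last_payment and (today - last_payment).days > LIMITATIONS_YEARS * 365:
            flags.append("Claim may be time-barred: last payment is outside the assumed limitations period.")

        # Flag totals that do not add up from principal, interest, and fees.
        parts = sum(petition.get(k, 0) for k in ("principal", "interest", "fees"))
        if abs(parts - petition.get("amount_claimed", 0)) > 0.01:
            flags.append("Amount claimed does not match principal + interest + fees.")

        # Flag filings with no documentation that the filer owns the debt.
        if not petition.get("chain_of_title_attached", False):
            flags.append("No documentation of debt ownership attached.")

        return flags


    example = {
        "last_payment_date": date(2015, 3, 1),
        "principal": 1200.00,
        "interest": 300.00,
        "fees": 45.00,
        "amount_claimed": 1800.00,
        "chain_of_title_attached": False,
    }
    for flag in screen_petition(example, today=date(2023, 7, 1)):
        print("-", flag)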

Research & System Design themes

  • AI to understand where processes need simplification, or where systems need to be reformed. It could use user error rates, continuances, low participation rates, or other factors to identify which parts of the justice system are the least accessible, and where rules, services, and technology need reform.
  • AI for understanding the courts’ performance, to see what is happening not only in case-level data but also at the document level. This could give a much richer picture of the processes and outcomes people are experiencing.

Topic 3: Data that will be important to make progress on Justice & AI

Legal Procedure, Services, and Form data that has been vetted by experts and approved as up to date. This data might then be used to train a model of ‘reliable, authoritative’ legal information for each jurisdiction, covering what a litigant should know when dealing with a certain legal problem.

  • Could researchers work with the LSC and local legal aid & court groups that maintain self-help content (legal help websites, procedural guides, forms, etc.) to gather this local procedural information, and then build a model that can deliver high-quality, jurisdiction-specific procedural guidance?
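
One plausible way to build this, sketched below, is retrieval-grounded guidance: instead of relying on a general model’s memory, the tool first looks up expert-vetted, jurisdiction-tagged self-help content and answers only from those passages. The tiny in-memory corpus, the placeholder guide text, and the word-overlap scoring are stand-ins for illustration; a real system would need a maintained content pipeline, a proper retriever, and expert review of outputs.

    # Sketch of retrieval-grounded procedural guidance over vetted self-help content.
    # The guide records and the scoring method are illustrative stand-ins only.

    vetted_guides = [
        {
            "jurisdiction": "CA",
            "topic": "eviction answer",
            "text": "Placeholder vetted guidance on the deadline and steps to respond "
                    "to an eviction case in this jurisdiction.",
            "last_reviewed": "2023-06-01",
        },
        {
            "jurisdiction": "NY",
            "topic": "eviction answer",
            "text": "Placeholder vetted guidance on how to answer an eviction petition "
                    "in this jurisdiction.",
            "last_reviewed": "2023-05-15",
        },
    ]


    def retrieve_guidance(question, jurisdiction):
        """Return the best-matching vetted guide for the user's jurisdiction, or None."""
        candidates = [g for g in vetted_guides if g["jurisdiction"] == jurisdiction]
        q_words = set(question.lower().split())

        def overlap(guide):
            return len(q_words & set(guide["text"].lower().split()))

        return max(candidates, key=overlap, default=None)


    hit = retrieve_guidance("How long do I have to respond to an eviction case?", "CA")
    if hit:
        # A generative model could then be instructed to answer using ONLY this
        # passage, and to cite its source and review date.
        print(hit["text"], f"(last reviewed {hit['last_reviewed']})")
    else:
        print("No vetted guidance found; refer the person to a human navigator.")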

Court Document data that includes the substance of pleadings, settlements, and judgments. Access to datasets with substantive data about the claims litigants are making, the terms they agree to, and the outcomes in judgments can provide needed information for research about the courts, and also for AI tools for litigants & service providers that analyze, synthesize, and predict.

  • Could courts partner with researchers to make filings and settlement documents available in an ethical, privacy-protective way?
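
If courts and researchers pursued this, one small piece of a privacy-protective pipeline could be automated redaction of obvious identifiers before documents enter a research dataset. The sketch below uses a few simple regular-expression patterns that are illustrative only; real de-identification of court records would require far more robust tooling, policy review, and human spot-checking.

    # Sketch of a first-pass redaction step before sharing court documents
    # with researchers. Patterns are illustrative and far from complete.
    import re

    PATTERNS = {
        "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "[DATE_OF_BIRTH]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    }


    def redact(text):
        """Replace matches of each pattern with a labeled placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(label, text)
        return text


    filing = ("Defendant Jane Roe, DOB 04/12/1986, phone 555-867-5309, "
              "email jane@example.com, SSN 123-45-6789, agrees to vacate by June 1.")
    print(redact(filing))
    # Names and addresses would still need NER-based or manual redaction.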

Domain Expert data, in which legal experts help rate or label legal data. What is good or bad? What is helpful or harmful? Building successful AI pilots will need input and quality control from domain experts, especially those who see how legal documents, processes, and services play out in practice.

  • Could justice groups & universities bring legal experts together to define standards, label datasets, and give input on the quality of models’ output? What structures, incentives, and compensation are needed to get legal experts more involved in this?
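
A modest starting point, sketched below, is a shared schema for expert ratings, so that reviews from different legal experts can be pooled into labeled datasets and evaluation scores. The rating dimensions (accuracy, completeness, risk of harm) and the 1-to-5 scale are assumptions for illustration; real rating instruments would need to be designed and validated with the experts themselves.

    # Sketch of a shared schema for expert ratings of AI-generated legal help,
    # plus simple aggregation. Rating dimensions and scale are illustrative.
    from dataclasses import dataclass, asdict
    from statistics import mean


    @dataclass
    class ExpertRating:
        item_id: str          # which model answer or document is being rated
        rater_id: str         # anonymized expert identifier
        jurisdiction: str     # where the expert practices
        accuracy: int         # 1 (wrong) to 5 (fully accurate) -- assumed scale
        completeness: int     # 1 to 5
        harm_risk: int        # 1 (no risk) to 5 (serious risk of harm)
        notes: str = ""


    def aggregate(ratings):
        """Average each dimension across raters for one item."""
        return {
            "n_raters": len(ratings),
            "accuracy": mean(r.accuracy for r in ratings),
            "completeness": mean(r.completeness for r in ratings),
            "harm_risk": mean(r.harm_risk for r in ratings),
        }


    ratings = [
        ExpertRating("answer-042", "expert-a", "TX", accuracy=4, completeness=3, harm_risk=2),
        ExpertRating("answer-042", "expert-b", "TX", accuracy=5, completeness=4, harm_risk=1,
                     notes="Misses the fee waiver option."),
    ]
    print(aggregate(ratings))
    print(asdict(ratings[0]))  # rows like this could be exported for training or evaluation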

Opportunities & Risks for AI, Legal Help, and Access to Justice

As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.

Could AI tools like ChatGPT, Bing Chat, and Google Bard help more people get crucial information about their rights & the law?

Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could they help people fill in paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?

The Stakeholder Session

In Spring 2023, the Stanford Legal Design Lab collaborated with the Self-Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics.

The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.

An overview of what we covered in our stakeholder session with justice professionals.

The stakeholders discussed 3 main areas where AI could enhance access to justice and provide more help to individuals with legal problems.

  1. How AI could help professionals like legal aid or court staff improve their service offerings
  2. How AI could help community members & providers do legal problem-solving tasks
  3. How AI could help executives, funders, advocates, and community leaders better manage their organizations, train others, and develop strategies for impact.

Opportunity 1: For Legal Aid & Court Service Providers to Deliver Better Services More Efficiently

The first opportunity area focused on how AI could assist legal aid providers in improving their services. The participants identified four ways in which AI could be beneficial:

  1. Helping experts create user-friendly guides to legal processes & rights
  2. Improving the speed & efficacy of tech tool development
  3. Strengthening providers’ ability to connect with clients & build a strong relationship
  4. Streamlining intake and referrals, and improving the creation of legal documents.

Within each of these zones, participants had many specific ideas.

Opportunities for legal aid & court staff to use AI to deliver better services

Opportunity 2: For People & Providers to Do Legal Tasks

The second opportunity area focused on empowering people and providers to better perform legal tasks. The stakeholders identified five main ways AI could help:

  1. understanding legal rules and policies,
  2. identifying legal issues and directing a person to their menu of legal options,
  3. predicting likely outcomes and facilitating mutual resolutions,
  4. preparing legal documents and evidence, and
  5. aiding in the preparation for in-person presentations and negotiations.

How might AI help people understand their legal problem & navigate it to resolution?

Each of these 5 areas of opportunity is full of detailed examples. Professionals had extensive ideas about how AI could help lawyers, paraprofessionals, and community members do legal tasks in better ways. Explore each of the 5 areas by clicking on the images below.

Opportunity 3: For Org Leadership, Policymaking & Strategies

The third area focused on how AI could assist providers and policymakers in managing their organizations and strategies. The stakeholders discussed three ways AI could be useful in this zone:

  1. training and supporting service providers more efficiently,
  2. optimizing business processes and resource allocation, and
  3. helping leaders identify policy issues and create impactful strategies.

AI opportunities to help justice system leaders

Explore the ideas for better training, onboarding, volunteer capacity, management, and strategizing by clicking on the images below.

Possible Risks & Harms of AI in Civil Justice

While discussing these opportunity areas, the stakeholders also addressed the risks and harms associated with the increased use of AI in the civil justice system. Some of the concerns raised include:

  • over-reliance on AI without assessing its quality and reliability,
  • the provision of inaccurate or biased information,
  • the potential for fraudulent practices,
  • the influence of commercial actors over the public interest,
  • the lack of empathy or human support in AI systems,
  • the risk of reinforcing existing biases, and
  • unequal access to AI tools.

The whiteboard of professionals’ 1st round of brainstorming about possible risks to mitigate for a future of AI in the civil justice system

This list of risks is not comprehensive, but it offers a first typology that future research & discussions (especially with other stakeholders, like community members and leaders) can build upon.

Infrastructure & initiatives to prioritize now

Our session closed with a discussion of practical next steps. What can our community of legal professionals, court staff, academics, and tech developers do now to build a better future in which AI helps close the justice gap, and in which the risks above are mitigated as much as possible?

The stakeholders proposed several infrastructure and strategy efforts that could lead to this better future. These include:

  • ethical data sharing and model building protocols,
  • the development of AI models specifically for civil justice, using trustworthy data from legal aid groups and courts to train the model on legal procedure, rights, and services,
  • the establishment of benchmarks to measure the performance of AI in legal use cases,
  • the adoption of ethical and professional rules for AI use,
  • recommendations for user-friendly AI interfaces, which help ensure people understand what the AI is telling them & how to think critically about the information it provides, and
  • the creation of guides for litigants and policymakers on using AI for legal help.
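
On the benchmark item in the list above, here is a minimal sketch of what a first-pass evaluation harness might look like: expert-written questions paired with facts a correct answer must mention, scored automatically before expert review. The example questions, the required phrases, and the stand-in model_answer function are hypothetical; a real benchmark would need expert-built, jurisdiction-specific test sets and much richer scoring.

    # Sketch of a tiny benchmark harness for legal-help answers. Test items,
    # required phrases, and the stand-in model are hypothetical placeholders.

    BENCHMARK = [
        {
            "question": "What should a tenant do after receiving an eviction summons?",
            "must_mention": ["deadline to respond", "free legal aid"],
        },
        {
            "question": "Can a debt collector contact a person at work?",
            "must_mention": ["ask them to stop", "in writing"],
        },
    ]


    def model_answer(question):
        """Stand-in for calling an actual AI system."""
        return ("You should check the deadline to respond on the summons and "
                "contact free legal aid in your area.")


    def score(answer, must_mention):
        """Fraction of required facts the answer mentions (a crude first-pass check)."""
        answer = answer.lower()
        hits = sum(1 for phrase in must_mention if phrase.lower() in answer)
        return hits / len(must_mention)


    for item in BENCHMARK:
        ans = model_answer(item["question"])
        print(f"{score(ans, item['must_mention']):.2f}  {item['question']}")
    # Low scores would be routed to expert reviewers rather than treated as final.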

Thanks to all the professionals who participated in the Spring 2023 session. We look forward to a near future where AI can help increase access to justice and improve court and legal aid services, while also being held accountable and having its risks mitigated as much as possible.

We welcome further thoughts on the opportunity, risk, and infrastructure maps presented above — and suggestions for future events to continue building towards a better future of AI and legal help.