
3 Shifts for AI in the Justice System: LSC 50th Anniversary presentation

In mid-April, Margaret Hagan presented on the Lab’s research and development efforts around AI and access to justice at the Legal Services Corporation 50th anniversary forum. This large gathering of legal aid executive directors, national justice leaders, members of Congress, philanthropists, and corporate leaders celebrated the work of LSC & profiled future directions of legal services.

Margaret was on a panel along with legal aid leader Sateesh Nori, Suffolk Law School Dean Andy Perlman, and former LSC president James Sandman.

She presented 3 big takeaways for the audience about how to approach whether and how AI should be used to close the justice gap — especially how to move beyond gut reactions & anecdotes that tend towards too much optimism or skepticism. Based on the Lab’s research and design activities, she proposed 3 big shifts for civil justice leaders towards generative AI.

Shift 1: Towards Techno-Realism

This shift away from hardline camps of optimism or pessimism about AI’s possible futures can lead us to more empirical, detailed work. Where are the specific tasks where AI can be helpful? Can we demonstrate with lab studies and controlled pilots whether AI can perform these specific tasks better than humans — with equal or higher quality and efficiency? This move towards applied research can lead to more responsible innovation, rather than rushing into AI applications too quickly or chilling the innovation space pre-emptively.

Shift 2: From Reactive to Proactive Leadership

The second shift is in how lawyers and justice professionals approach the world of AI. Will they be reactive to what technologists put out to the public, trying to create the right mix of norms, lawsuits, and regulations to push AI towards being safe enough, and of high enough quality, for legal use cases?

Instead, they can be proactive. They can run R&D cohorts to see what AI is good at and what risks and harms emerge in test applications, and then work with AI companies and regulators to encourage AI’s strengths and mitigate its risks. This means joining together with technologists (especially those at universities and benefit corporations) to do hands-on, exploratory demonstration project development to better inform investments, regulation, and other policy-making on AI for justice use cases.

Shift 3: Local Pilots to Coordinated Network

The final shift is about how innovators work. Legal aid groups or court staff could launch AI pilots on their own, building out a new application or bot for their local jurisdiction, and then share it at upcoming conferences to let others know about it. Or, from the beginning, they could be crafting their technical system, UX design, vendor relationships, data management, and safety evaluations in concert with others around the country who are working on similar efforts. Even if the ultimate application is run and managed locally, much of the infrastructure can be shared in national cohorts. These national cohorts can also help gather data, experiences, risk/harm incidents, and other important information that can help guide task forces, attorneys general, tech companies, and others setting the policies for legal help AI in the future.

See more of the presentation in the slides below.


User interviews on AI & Access to Justice

As we continue to run interviews with people from across the country about their possible use of AI for legal help tasks, we wanted to share what we’re learning about people’s thoughts on AI. Please see the full interactive Data Dashboard of interview results here.

Below, find images of the data dashboard. Follow the link above to interact more with the data.

We will also maintain a central page of user research findings on AI & Access to Justice here.

Below, find the results of our interviews as of early 2024.

We asked people to self-assess their ability to solve legal problems and to use the Internet to solve life problems.

We also asked them how often they use the Internet.

Finally, we asked them about their past use of generative AI tools like ChatGPT, Bing/CoPilot, or Bard/Gemini.

Trust & Value of AI to Participants

We asked people at the beginning of the interview how much they would trust what AI would tell them for a legal problem.

We asked them the same question after they tried out an AI tool for a fictional legal problem of getting an eviction notice from their landlord.

We also asked them how helpful the AI was in dealing with the fictional problem, and how likely they would be to use this in the future for similar problems.

Preferences for possible AI tool features

We presented a variety of possible interface & policy changes that could be made to an AI platform.

We asked the participants to rank the utility of these different possible changes.


AI as the next frontier of making justice accessible

Last week, Margaret had the privilege of presenting on the lab’s work on AI and Innovation at the American Academy of Arts and Sciences in Cambridge, Massachusetts.

As part of the larger Making Justice Accessible conference, her work was featured on the panel about new solutions to improve the civil justice system through technology.

She discussed how the Lab’s current research and development work around AI has grown out of a larger question about helping people who are increasingly going online to find legal help.

The AI work is an outgrowth of previous work on

  • improving legal help websites,
  • auditing and improving search engines’ treatment of legal queries,
  • working on new ways to present information in more visual and plain language ways, and
  • building cohorts of providers across regions to have more standardized and discoverable help online.

This panel also included presentations on other, linked efforts to use technology to improve civil justice, including Georgetown’s Judicial Innovation Fellowship program, Stanford’s Filing Fairness Project, and Suffolk LIT Lab’s document assembly and efiling efforts.


AI & Justice Workers

At the Arizona State University/American Bar Foundation conference on the Future of Justice Work, Margaret Hagan spoke on whether and how generative AI might become part of new service and business models for serving people with legal problems.

Many in the audience are already developing new staffing & service models that combine traditional lawyer-provided services with help provided by community justice workers.

In the conference’s final session, the panelists discussed how technology — particularly the new generative AI models — might also figure into new initiatives to better reach & serve people struggling with eviction orders, bad living conditions, domestic violence, debt collection, custody problems, and more.

Margaret presented a brief summary of the Legal Design Lab’s work on user research into what people need & want from legal AI, how they currently use AI tools, what justice professionals are brainstorming as possible AI-powered justice work, and metrics and benchmark protocols to evaluate the AI.

Possible AI-powered justice work zones

This clear listing of the tasks that go into “legal work” and “legal services” — which we need to develop for AI — is similar to what people working on new community justice worker models are also doing.

Breaking legal work apart into these tasks can help us think systematically about new, stratified models of delivering services (see the sketch after the list below).

  • Inside of these zones of work, what are the specific tasks that exist (that lawyers and legal org staff currently do, or should be doing)?
  • Who can and should be best doing this task?
    • Only Seasoned Lawyers: Which of the tasks can only be done by expert lawyers, with JDs, bar admissions, and multiple years practicing in a given problem area & on this task?
    • Medium-to-Novice Lawyers: Which of the tasks can be done by medium-to-novice lawyers, with JDs, bar admission, but little to no practice experience in this problem area or on this task (like pro bono volunteers, or new lawyers)?
    • Seasoned Justice Workers: Which of the tasks can be done by people who are paralegals, advocates, volunteers, social workers, and other community justice workers who have multiple years working this problem area & doing this kind of task?
    • Medium-to-Novice Justice Workers: Which of the tasks can be done by community justice workers who are new to this problem area & task?
    • Tech + Lawyer/Justice Worker: Which of these tasks can be done by technology (initial draft/work product) then reviewed by a lawyer or justice worker?
    • Technology: Which of these tasks can be done by technology without human review?
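
To make this stratification concrete, here is a minimal sketch of how the tiers and a task list could be modeled in code. The tier ordering, task names, and assignments are invented for illustration; they are not the Lab’s actual framework.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical tiers mirroring the list above, ordered from most expert (1)
# to least expert (6).
class Tier(IntEnum):
    SEASONED_LAWYER = 1
    NOVICE_LAWYER = 2
    SEASONED_JUSTICE_WORKER = 3
    NOVICE_JUSTICE_WORKER = 4
    TECH_WITH_HUMAN_REVIEW = 5
    TECH_ONLY = 6

@dataclass
class Task:
    name: str
    problem_area: str
    floor: Tier  # the least-expert tier that could safely perform this task

# Invented example assignments, for illustration only.
TASKS = [
    Task("argue a dispositive motion", "eviction", Tier.SEASONED_LAWYER),
    Task("review an AI-drafted answer form", "eviction", Tier.NOVICE_LAWYER),
    Task("explain the court timeline", "eviction", Tier.SEASONED_JUSTICE_WORKER),
    Task("send hearing-date reminders", "eviction", Tier.TECH_ONLY),
]

def doable_by(tier: Tier) -> list[Task]:
    """Tasks this tier can take on: anything whose floor is this tier or less expert."""
    return [t for t in TASKS if tier <= t.floor]

# A seasoned justice worker could take on their own tier's tasks plus
# everything assignable further down the ladder.
for task in doable_by(Tier.SEASONED_JUSTICE_WORKER):
    print(task.name)
```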

Ideally, our justice community will have more of these discussions about the future of providing services with smart, safe models that can improve capacity & people’s outcomes.


Bringing an AI & Access to Justice Community Together

Last week, our team at the Legal Design Lab presented at the Legal Services Corporation Innovations in Technology Conference on several topics: how to build better legal help websites, how to improve e-filing and forms in state court systems, and how to use texting to assess brief services for housing matters.

One of the most popular, large sessions we co-ran was on Generative AI and Access to Justice. In this panel for several hundred participants, Margaret Hagan, Quinten Steenhuis of Suffolk LIT Lab, and Hannes Westermann of the Cyberjustice Laboratory in Montreal presented on opportunities, user research, demonstrations, and the risks and safety of AI and access to justice.

We started the session by doing a poll of the several hundred participants. We asked them a few questions:

  • Are you optimistic, pessimistic, or in between on the future of AI and access to justice?
  • What are you working on in this area? Do you have projects to share?
  • What words come to mind when you think about AI and access to justice?

Margaret presented on opportunities for AI & A2J, user research the Lab is doing on how people use AI for legal problem-solving, and what justice experts have said are the metrics to look at when evaluating AI. Quinten explained Generative AI & demonstrated Suffolk LIT Lab’s document automation AI work. Hannes presented on several AI-A2J projects he has worked on, including JusticeBot for housing law in Quebec, Canada, and LLMediator, built with Jaromir Savelka and Karim Benyekhlef to improve dispute resolution between people in conflict.

We then went through a hands-on group exercise to spot AI opportunities in a particular person’s legal problem-solving journey & talk through risks, guardrails, and policies to improve the safety of AI.

Here is some of what we learned at the presentation and some thoughts about moving forward on AI & Access to Justice.

Cautious Optimism about AI & Justice

Many justice professionals, especially the folks who had come to this conference and joined the AI session, are optimistic about Artificial Intelligence’s future impact on the justice system — or are waiting to see. A much smaller percentage is pessimistic about how AI will play out for access to justice.

We had 38 respondents to our poll before our presentation started, and here’s the breakdown of optimism-pessimism.

In our follow-up conversations, we heard regularly that people were excited about the potential of AI to provide services at scale, affordably, to more people (aka, ‘closing the justice gap’) — but that regulation and controls need to be in place to prevent harm.

Looking at the word cloud of people’s responses to the question “What words come to mind when you think about AI & the justice system?” further demonstrates this cautious optimism (with a focus on smart risk mitigation to empower good, impactful innovation).

This cautious optimism is in contrast to a totally cold, prohibitive approach to AI. If we saw more pessimism, we might have heard more people expressing that there should be no use of AI in the justice system. But at least among the group who attended this conference, we saw little of that perspective (that AI should be avoided or shut down in legal services, for fear of its possibility to harm). Rather, people seemed open to exploring, testing, and collaborating on AI & Access to Justice projects, as long as there was a strong focus on protecting against bias, mistakes, and other risks that could harm the public.

We need to talk about risks more specifically

That said, despite the pervasive concern about risk and harm, there is not yet a clear framework for how to protect people from them.

This could be symptomatic of the broader way that legal services have been regulated in the past: instead of talking about specific risks, we speak in generalizations about ‘protecting the consumer’. We don’t have a clear typology of what mistakes can happen, what harms can occur, how important/severe these are, and how to protect against them.

Because of this lack of a clear risk framework, most discussions about how to spot risks and harms of AI are general, anecdotal, and high-level. Even if everyone agrees that we need safety rules, including technical and policy-based interventions, we don’t have a clear menu of what those can be.

This is likely to be a large, multi-stakeholder, multi-phased process — but we need more consensus on a working framework of what risks and mistakes to watch out for, how much to prioritize them, and what kinds of things can protect people from them. Hopefully, there will be more government agencies and cross-state consortiums working on these actionable policy frameworks that can encourage responsible innovation.

Demos & User Journeys get to good AI brainstorms

Talking about AI (or brainstorming about it) can be intimidating for non-technical folks in the justice system. They may feel that it’s unclear or difficult to know where to begin when thinking about how AI could help them deliver services better, how clients could benefit, or how it could play a good role in delivering justice.

Demonstration projects, like those shared by Quinten and Hannes, are beneficial to legal aid, court staff, and other legal professionals. These concrete, specific demos allow people to see exactly how AI solutions could play out — and then riff on these demos, to think through variations of how AI could help with client tasks, service provider tasks, and the work of executive directors, funders, and others.

Demo projects don’t have to be live, in-the-field AI efforts. Rather, showing early-stage versions of AI or even more provocative ‘pie-in-the-sky’ AI applications can help spark more creativity in the justice professional community, get into more specific conversations about risks and harms, and help drive momentum to make good responsible innovation happen.

Aside from demos of AI projects, user journey exercises can also be a great way to spark a productive brainstorm of opportunities, risks, and policies.

In the second half of the presentation, we ran an interactive workshop. We shared a user story of someone going through a problem with their landlord, in which their Milwaukee apartment had the heat off and it wasn’t getting fixed.

We walked through a status quo user journey, in which they tried to seek legal help, got delayed, made some connections, and were eventually connected with someone to draft a demand letter.

We asked all of the participants to work in small groups to identify where in the status quo user journey AI could be of help. They brainstormed lots of ideas for particular touchpoints and stakeholders: the user, friends and family, community organizations, legal aid, and pro bono groups. We then asked them to spot safety risks and possible harms, and finally to propose ways to mitigate these risks.

This kind of specific user journey and case-type exercise can help people more clearly see how they could apply the general things they’re learning about AI to specific legal services. It inspires more creativity and builds more common ground for collaboration about where the priorities should be.

Need for a Common AI-A2J Agenda of Tasks

During our exercise and follow-up conversations, we saw a laundry list emerge of possible ways AI could help different stakeholders in the justice system. This flare-out of ideas is exciting but also overwhelming.

Which ideas are worth pursuing, funding, and piloting first?

We need a working agenda of AI and Access to Justice tasks. Participants discussed many different kinds of tasks that AI could help with:

  • drafting demand letters,
  • doing smarter triage,
  • referral of people to services that can be a good fit,
  • screening frequent, repeat plaintiffs’ filings for their accuracy and legitimacy,
  • providing language access,
  • sending reminders and empowerment coaching, and
  • assisting people in filling in forms, and beyond.

It’s great that there are so many different ideas about how AI could be helpful, but to get more collaboration from computer scientists and technologists, we need to have a clear set of goals, prioritizing among these tasks.

Ideally, this common task list would be organized around what is feasible and impactful for service providers and community members. This task list could attract more computer scientists to help us build, fine-tune, test, and implement generative AI that can achieve these tasks.

Our group at Legal Design Lab is hard at work compiling this possible list of high-impact, lower-risk AI and Access to Justice tasks. We will be making it into a survey and asking as many people as possible in the justice professional community to rank which tasks would be the most impactful if AI could do them.

This prioritized task list will then be useful in getting more AI technologists and academic partners to see if and how we can build these models, what benchmarks we should use to evaluate them, and how we can start doing limited, safety-focused pilots of them in the field.

Join our community!

Our group will be continuing to work on building a strong community around AI and access to justice, research projects, models, and interdisciplinary collaborations. Please stay in touch with us at this link, and sign up here to stay notified about what we’re doing.


Presentation to Indiana Coalition for Court Access

On October 20th, the Legal Design Lab’s executive director presented on “AI and Legal Help” to the Indiana Coalition for Court Access.

This presentation was part of a larger discussion about research projects, a learning community of judges, and evidence-based court policy and rules changes. What can courts, legal aid groups, and statewide justice agencies be doing to best serve people with legal problems in their communities?


Margaret’s presentation covered the initial user research that the lab has been conducting about how different members of the public think about AI platforms with regard to legal problem-solving, and how they use these platforms to deal with problems like evictions. The presentation also spotlit the concerning trends, mistakes, and harms around public use of AI for legal problem-solving, which justice institutions and technology companies should focus on in order to prevent consumer harms while harnessing the opportunity of AI to help people understand the law and take action to resolve their legal problems.

The discussion after the presentation covered topics like:

  • Is there a way for justice actors to build a more authoritative legal info AI model, especially with key legal information about local laws and rights, court procedures and timelines, court forms, and service organizations’ contact details? This might help the AI platforms avoid mistaken information or hallucinated details.
  • How could researchers measure the benefits and harms of AI-provided legal answers, compared to legal expert-provided answers, compared to no services at all? Aside from anecdotes and small samples, is there a more deliberate way to analyze the performance of AI platforms when it comes to answering people’s questions about the law, procedures, forms, and services? This might include systematically measuring how often these platforms make mistakes, categorizing exactly what the mistakes are, and estimating or measuring how much harm emerges from these mistakes. A similar deliberate protocol might be developed for the benefits that these platforms provide.

Interest Form signup for AI & Access to Justice

Are you a legal aid lawyer, court staff member, judge, academic, tech developer, computer science researcher, or community advocate interested in how AI might increase Access to Justice — and also what limits and accountability we must establish so that it is equitable, responsible, and human-centered?

Sign up at this interest form to stay in the loop with our work at Stanford Legal Design Lab on AI & Access to Justice.

We will be sending those on this list updates on:

  • Events that we will be running online and in person
  • Publications, research articles, and toolkits
  • Opportunities for partnerships, funding, and more
  • Requests for data-sharing, pilot initiatives, and other efforts

Please be in touch through the form — we look forward to connecting with you!


Report a problem you’ve found with AI & legal help

The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute to this database by entering information on this form, which feeds into the database.

We will be making this database available in the near future, as we collect more records & review them.

For this database, we’re looking for specific examples of where AI platforms (like ChatGPT, Bard, Bing Chat, etc.) provide problematic responses, like:

  • incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
  • hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
  • irrelevant, distracting, or off-topic information;
  • misrepresentation of the law;
  • overly simplified information that loses key nuance or cautions;
  • otherwise doing something that might be harmful to a person trying to get legal help.

You can send in any incidents you’ve experienced at this form: https://airtable.com/apprz5bA7ObnwXEAd/shrQoNPeC7iVMxphp

We will be reviewing submissions & making this incident database available in the future, for those interested.

Fill in the form to report an AI-Justice problem incident


American Academy event on AI & Equitable Access to Legal Services

The Lab’s Margaret Hagan was a panelist at the May 2023 national event on AI & Access to Justice hosted by the American Academy of Arts & Sciences.

The event was called AI’s Implications for Equitable Access to Legal and Other Professional Services. It took place on May 10, 2023. Read more about the American Academy’s work on justice reform here.

More about the May event from the American Academy: “Increasingly capable AI tools like Chat GPT and Bing Chat will impact the accessibility, reliability, and regulation of legal and other professional services, like healthcare, for an underserved public. In this event, Jason Barnwell, Margaret Hagan, and Andrew M. Perlman discuss these and other implications of AI’s rapidly evolving capabilities.”

You can see a recording of the panel, which featured Jason Barnwell (Microsoft), Margaret Hagan (Stanford Legal Design Lab), and Andrew M. Perlman (Suffolk Law School).


AI Goes to Court: The Growing Landscape of AI for Access to Justice

By Jonah Wu

Student research fellow at Legal Design Lab, 2018-2019

1. Can AI help improve access to civil courts?

Civil court leaders have a newly strong interest in how artificial intelligence can improve the quality and efficiency of legal services in the justice system, especially for problems that self-represented litigants face [1,2,3,4,5]. The promise is that artificial intelligence can address the fundamental crises in courts: that ordinary people are not able to use the system clearly or efficiently; that courts struggle to manage vast amounts of information; and that litigants and judicial officials often have to make complex decisions with little support.

If AI is able to gather and sift through vast troves of information, identify patterns, predict optimal strategies, detect anomalies, classify issues, and draft documents, the promise is that these capabilities could be harnessed for making the civil court system more accessible to people.

The question, then, is how real these promises are, and how they are being implemented and evaluated. Now that early experimentation and agenda-setting have begun, the study of AI as a means for enhancing the quality of justice in the civil court system deserves greater definition. This paper surveys current applications of AI in the civil court context. It aims to lay a foundation for further case studies, observational studies, and shared documentation of AI for access to justice development research. It catalogs current projects, reflects on the constraints and infrastructure issues, and proposes an agenda for future development and research.

2. Background to the Rise of AI in the Legal System

When I use the term Artificial Intelligence, I distinguish it from general software applications that are used to input, track, and manage court information. Our basic criterion for AI-oriented projects is that the technology has the capacity to perceive knowledge, make sense of data, generate predictions or decisions, translate information, or otherwise simulate intelligent behavior. AI does not include all court technology innovations. For example, I am not considering websites that broadcast information to the public; case or customer management systems that store information; or kiosks, apps, or mobile messages that communicate case information to litigants.

The discussion of AI in criminal courts is currently more robust than in civil courts. It has been proposed as a means to monitor and recognize defendants; support sentencing and bail decisions; and better assess evidence [3]. Because of the rapid rise of risk assessment AI in the setting of bail or sentencing, there has been more description and debate on AI [6]. There has been less focus on AI’s potential, or its concerns, in the civil justice system, including for family, housing, debt, employment, and consumer litigation. That said, there has been a robust discourse over the past 15 years about what technology applications and websites could be used by courts and legal aid groups to improve access to justice [7].

The current interest in AI for civil court improvements is in sync with a new abundance of data. As more courts gather data about administration, pleadings, litigant behavior, and decisions [1], powerful opportunities emerge for research and analytics in the courts, which can lead to greater efficiency and better design of services. Some groups have managed to use data to bring enormous new volumes of cases into the court system — like debt collection agencies, which have automated filings of cases against people for debt [8], often resulting in complaints that have missing or incorrect information and minimal, ineffective notice to defendants. If litigants like these can harness AI strategies to flood the court with cases, could the courts use their own AI strategies to manage and evaluate these cases and others — especially to better protect unwitting defendants against low-quality lawsuits?

The rise in interest in AI coincides with state courts experiencing economic pressure: budgets are cut, hours are reduced, and even some locations are closed [9]. Despite financial constraints, courts are expected to provide modern, digital, responsive services like those in other consumer sectors. This presents a challenging expectation for the courts. How can they provide judicial services in sync with rapidly modernizing service sectors — in finance, medicine, and other government bodies — within significant cost constraints? The promise of AI is that it can scale up quality services and improve efficiency, boosting performance and saving costs [10].

A final background factor to consider is the growing concern over public perceptions of the judicial system. Yearly surveys indicate that communities find courts out of touch with the public, with calls for greater empathy and engagement with “everyday people” [11]. Given that the mission of the court is to provide an avenue to lawful justice for its constituents, if AI can help the court better achieve that mission without adding adverse risks, it would help the courts establish greater procedural and distributive justice for litigants, and hopefully bolster its legitimacy with, and engagement from, the public.

3. What could be? Proposals in the Literature for AI for access to justice

What has the literature proposed on how AI techniques can address the access to justice crisis in civil courts? Over the past several decades, distinct use cases have been proposed for development. There is a mix of litigant-focused use cases (to help them understand the system and make stronger claims), and court-focused use cases (to help it improve its efficiency, consistency, transparency, and quality of services).

  • Answer a litigant’s questions about how the law applies to them. Computational law experts have proposed automated legal reasoning as a way to understand if a given case is in accordance with the law or not [12]. Court leaders also envision AI helping litigants conduct effective, direct research into how the law would apply to them [4,5]. Questions of how the law would apply to a given case lie on a spectrum of complexity. Questions that are more straightforwardly algorithmic (e.g., whether a person exceeded a speed limit, or whether a quantity or date is in an acceptable range) can be automated with little technical challenge [13] (see the sketch after this list). Questions that turn on more qualitative standards, like whether something was reasonable, unconscionable, foreseeable, or done in good faith, are not as easily automated — but they might be with greater work in deep learning and neural networks. Many propose that expert systems or AI-powered chatbots might help litigants know their rights and make claims [14].
  • Analyze the quality of a legal claim and evidence. Several proposals are around making it easier to understand what has been submitted to court, and how a case has proceeded. Some exploratory work has pointed towards how AI could automatically classify a case docket — the chronological events in a case — so that it could be understood computationally [15]. Machine learning could find patterns in claims and other legal filings, to indicate whether something has been argued well and whether the law supports it, and to evaluate it against competing claims [16].
  • Provide coordinated guidance for a person without a lawyer. Many have proposed focusing on the development of a holistic AI-based system to guide people without lawyers through the choices and procedure of a civil court case. One vision is of an advisory system that would help a person understand available forms of relief, help them understand if they can meet the requirements, inform them of procedural requirements, and help them draft court documents [17,18].
  • Predict and automate decision-making. Another proposal, discussed within the topic of online dispute resolution, is around how AI could either predict how a case will be decided (and thus give litigants a stronger understanding of their chances), or actually generate a proposal for how a dispute should be settled [19,20]. In this way, prediction of judicial decisions could be useful for access to justice. It could be integrated into online court platforms where people are exploring their legal options, or where they are entering and exchanging information in their case. The AI would help litigants make better choices regarding how they file, and it would help courts expedite decision-making by either supporting or replacing human judges’ rulings.
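
To illustrate the “straightforwardly algorithmic” end of that spectrum (see the first proposal above), here is a minimal sketch. The speed limit and filing deadline are invented placeholders, not real statutes:

```python
from datetime import date

# Invented rules, for illustration: at this end of the spectrum, facts map
# mechanically onto a legal test.
SPEED_LIMIT_MPH = 65
ANSWER_DEADLINE_DAYS = 30

def exceeded_speed_limit(recorded_speed_mph: float) -> bool:
    return recorded_speed_mph > SPEED_LIMIT_MPH

def answer_filed_on_time(served: date, filed: date) -> bool:
    """E.g., an answer must be filed within N days of service."""
    return (filed - served).days <= ANSWER_DEADLINE_DAYS

print(exceeded_speed_limit(72))  # True
print(answer_filed_on_time(date(2019, 1, 2), date(2019, 1, 20)))  # True

# Qualitative standards like "reasonable" or "good faith" have no such
# mechanical test; that is where statistical and deep-learning approaches
# would have to come in.
```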

4. What is happening so far? AI in action for access

With many proposals circulating about how AI might be applied for access to justice, where can we see these possibilities being developed and piloted with courts? Our initial survey identifies a handful of applications in action.

4.1. Predicting settlement arrangements, judicial decisions, and other outcomes of claims

One of the most robust areas of AI in access to justice work has been in developing applications to predict how a claim, case, or settlement will be resolved by a court. This area of predictive analytics has been demonstrated in many research projects, and in some cases has been integrated into court workflows.

In Australian family law courts, a team of artificial intelligence experts and lawyers has begun to develop the Split-Up system, which uses rule-based reasoning in concert with neural networks to predict outcomes for property disputes in divorce and other family law cases [21]. Split-Up is used by judges to support their decision-making, by helping them identify the assets of the marriage that should be included in a settlement, and then establishing what percentage of the common pool each party should receive — a discretionary judicial choice based on factors including contributions, amount of resources, and future needs. The system incorporates 94 relevant factors into its analysis, using neural network statistical techniques. The judge can then propose a final property order based on the system’s analysis. The system also seeks to make its decisions transparent, so it uses Toulmin argument structures to represent how it reached its predictions.

Researchers have created algorithms to predict Supreme Court and European Court of Human Rights decisions [22,23,24]. They use natural language processing and machine learning to construct models that predict the courts’ decisions with strong accuracy. Their predictions draw from the formal facts submitted in the case to identify what the likely outcome, and potentially even individual justices’ votes, will be. This judicial decision prediction research could possibly be used to offer predictive analytic tools to litigants, so they can better assess the strength of their claim and understand what outcomes they might face. Legal technology companies like Ravel and Lex Machina [25,26] claim that they can predict judges’ decisions and case behavior, or the outcomes of an opposing party. These applications are mainly aimed at corporate-level litigation, rather than access to justice.
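
As a rough illustration of how fact-based outcome prediction works, here is a minimal sketch of a text classifier. The fact summaries, outcome labels, and model choice (TF-IDF features with logistic regression) are toy stand-ins; the cited studies train far richer models on thousands of published decisions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented fact summaries and outcomes, for illustration only.
facts = [
    "tenant withheld rent after landlord ignored repair requests",
    "landlord gave proper notice and tenant admits nonpayment",
    "employer terminated worker one week after injury complaint",
    "contract signed by both parties with clear payment terms",
]
outcomes = ["claimant", "respondent", "claimant", "respondent"]

# Turn the facts into n-gram features and fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(facts, outcomes)

new_case = ["tenant reported a broken heater and landlord took no action"]
print(model.predict(new_case))        # predicted winner
print(model.predict_proba(new_case))  # a rough "strength of claim" signal
```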

4.2. Detecting abuse and fraud against people the court oversees

Courts’ role in overseeing guardians and conservators means that they should be reducing financial exploitation of vulnerable people by those appointed to protect them. With particular concern for financial abuse of the elderly by their conservators or guardians, a team in Utah began building an AI tool to identify likely fraud in the reported financial transactions that conservators or guardians submit to the court. The system, developed in concert with a Minnesota court system at a hackathon, would detect anomalies and fraud-related patterns, and send flag notifications to courts to investigate further [28].
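
Here is a minimal sketch of that anomaly-detection idea, using simulated transactions and an off-the-shelf isolation forest. The features, distributions, and contamination rate are invented for illustration; the actual system’s design may differ.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated conservator transactions: [amount_usd, days_since_last_filing].
# Both columns are invented stand-ins for fields in court-filed reports.
routine = np.column_stack([rng.normal(200, 50, 200), rng.integers(25, 35, 200)])
suspect = np.array([[5000.0, 2], [4200.0, 1]])  # large, rapid withdrawals
transactions = np.vstack([routine, suspect])

# Fit an unsupervised detector and flag outliers (-1 = anomalous).
detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)

for amount, gap in transactions[flags == -1]:
    print(f"flag for court review: ${amount:.2f} after {gap:.0f} days")
```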

4.3. Preventative diagnosis of legal issues, matching to services, and automating relief

A robust branch of applications has been around using AI techniques to spot people’s legal needs (ones they potentially did not know they had), and then either match them to a service provider or automate a service for them, to help resolve their need. This approach has begun with the expungement use case — in which states have policies to help people clear their criminal records, but without widespread uptake. With this problem in mind, groups have developed AI programs to automatically flag who has a criminal record to clear, and then to streamline the expungement process for their region. In Maryland, Matthew Stubenberg from Maryland Volunteer Lawyers Service (now in Harvard’s A2J Lab) built a suite of tools to spot their organization’s clients’ problems, including overdue bills and criminal records that could be expunged. This tool helped legal aid attorneys diagnose their clients’ problems. Stubenberg also made the criminal record application public-facing, as MDExpungement, for anyone to automatically find out if they have a criminal record and to submit a request to clear it [29].

Code for America is working inside courts to develop another AI application for expungement. They are working with the internal databases of California courts to automatically identify expungement-eligible records, eliminating the need for individuals to apply [30].

The authors, in partnership with researchers at Suffolk LIT Lab, are working on an AI application to automatically detect legal issues in people’s descriptions of their life problems, which they share in online forums, social media, and search queries [31]. This project involves labeling datasets of people’s problem stories, taken from Reddit and online virtual legal clinics, to then train a classifier to automatically recognize what specific legal issue a person might have based on their story. This classifier could be used to power referral bots (that send people messages with local resources and agencies that could help them), or to translate people’s problem stories into actionable legal triage and advisory systems, as has been envisioned in the literature.
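
A minimal sketch of what such an issue-spotting classifier plus referral bot could look like follows. The stories, labels, model choice, and referral map are invented placeholders, not the project’s actual data or design:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented problem stories and issue labels, for illustration only.
stories = [
    "my landlord is evicting me even though I paid rent",
    "I got a notice to leave my apartment in three days",
    "a debt collector keeps calling me at work about an old card",
    "I was sued over a credit card balance I do not recognize",
]
issues = ["housing", "housing", "debt", "debt"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB()).fit(stories, issues)

# A hypothetical referral map a bot could draw on once an issue is spotted.
REFERRALS = {
    "housing": "local tenant rights organization",
    "debt": "consumer law legal aid clinic",
}

issue = classifier.predict(["sheriff posted an eviction notice on my door"])[0]
print(f"detected issue: {issue} -> suggest: {REFERRALS[issue]}")
```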

4.4. Analyzing quality of claims and citations

Considering how to help courts be more efficient in their analysis of claims and evidence, there are some applications — like the product Clerk from the company Judicata — that can read, analyze, and score submissions that people and lawyers make to the court [32]. These applications can assess the quality of a legal brief, giving clerks, judges, or litigants the ability to identify the source of the arguments, cross-check them against the original, and possibly also find other related cases. In addition to improving the efficiency of analysis, the tool could be used for better drafting of submissions to the court — with litigants checking the quality of their pleadings before submitting them.
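
One small piece of what such tools automate can be sketched simply: pulling citations out of a brief so each can be checked against its source. The regex below covers only a toy “volume reporter page” pattern, far narrower than what a product like Clerk would handle:

```python
import re

# Toy citation pattern: "volume reporter page" for a few reporters only.
CITATION = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|F\.\d[a-z]*|Cal\.\s?App\.\s?\d[a-z]*)\s+(\d{1,4})\b"
)

# Invented brief text for illustration.
brief = (
    "Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and "
    "Smith v. Jones, 950 F.2d 100 (9th Cir. 1991)."
)

for volume, reporter, page in CITATION.findall(brief):
    # A real tool would now retrieve the cited source and compare it
    # against the proposition it is cited for.
    print(f"check citation: {volume} {reporter} {page}")
```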

4.5. Active, intelligent case management

The Hebei High Court in China has reported the development of a smart court management AI, termed Intelligent Trial 1.0 system [33]. It automatically scans in and digitizes filings; it classifies documents into electronic files; it matches the parties to existing case parties; it identifies relevant laws, cases, and legal documents to be considered; it automatically generates all necessary court procedural documents like notices and seals; and it distributes cases to judges for them to be put on the right track. The system coordinates various AI tasks together into a workstream that can reduce court staff and judges’ workloads.
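
The coordination itself can be sketched as a simple pipeline, where each function stands in for one AI component (classification, law matching, document generation, case assignment). All names, fields, and outputs below are hypothetical placeholders, not details of the Hebei system:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    raw_filing: str
    doc_type: str = ""
    relevant_law: list = field(default_factory=list)
    notices: list = field(default_factory=list)
    assigned_judge: str = ""

def classify_document(case: Case) -> Case:
    case.doc_type = "small-claims complaint"  # stand-in for a classifier
    return case

def match_law(case: Case) -> Case:
    case.relevant_law = ["Civil Procedure art. 123 (placeholder)"]  # stand-in for retrieval
    return case

def draft_notices(case: Case) -> Case:
    case.notices = [f"Notice of filing: {case.doc_type}"]  # stand-in for doc generation
    return case

def assign_judge(case: Case) -> Case:
    case.assigned_judge = "Judge A (round-robin placeholder)"  # stand-in for distribution
    return case

PIPELINE = [classify_document, match_law, draft_notices, assign_judge]

case = Case(raw_filing="scanned complaint text ...")
for step in PIPELINE:
    case = step(case)
print(case)
```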

4.6. Online dispute resolution platforms and automated decision-making

Online dispute resolution platforms have grown around the United States, some of them using AI techniques to sort claims and propose settlements. Many ODR platforms do not use AI, but rather act as a collaboration and streamlining platform for litigants’ tasks. ODR platforms like Rechtwijzer, MyLawBC, and the British Columbia Civil Resolution Tribunal use some AI techniques to sort which people can use the platform to tackle a problem, and to automate decision-making and settlement or outcome proposals [34].

We also see new pilots of online dispute platforms in Australia, in the state of Victoria with its VCAT pilot for small claims (now on hiatus, awaiting future funding) — and in Utah, for small claims in one location outside Salt Lake City.

These pilots are using platforms like Modria (part of Tyler Technologies), Modron, or Matterhorn from Court Innovations. How much AI is part of these systems is not clear — it seems they are mainly platforms for logging details and preferences, communicating between parties, and drafting/signing settlements (without any algorithm or AI tool making a decision proposal or crafting a strategy for the parties). If the pilots are successful and become ongoing projects, then we can expect future iterations to possibly involve more AI-powered recommendations or decision tools.

5. Agenda for Development and Infrastructure of AI in access to justice

If an ecosystem of access to justice AI is to be accelerated, what is the agenda to guide the growth of projects? There is work to be done on the infrastructure of sharing data, defining ethics standards, security standards, and privacy policies. In addition, there is organizational and coalition-building work, to allow for more open innovation and cross-organization initiatives to grow.

5.1. Opening and standardizing datasets

Currently, the field of AI for access to justice is hampered by the lack of open, labeled datasets. Courts do hold relatively small datasets, but there are no standard protocols to make them available to the public or to researchers, nor are there labeled datasets to be used in training AI tools [35]. There are a few examples of labeled court datasets, like from the Board of Veterans Appeals [36]. A newly announced US initiative, the National Court Open Data Standards Project, will promote standardization of existing court data, so that there can be more seamless sharing and cross-jurisdiction projects [37].

5.2. Making Policies to Manage Risks

There should be multi-stakeholder design of this infrastructure, to define an evolving set of guidance on the following large risks that court administrators have identified as worries with new AI in courts [4,5].

  • Bias of possible Training Data Sets. Can we better spot, rectify, and condition the inherent biases in the data sets that we use to train the new AI?
  • Lack of transparency of AI Tools. Can we create standard ways to communicate how an AI tool works, to ensure there is transparency to litigants, defendants, court staff, and others, so that there can be robust review of it?
  • Privacy of court users. Can we have standard redaction and privacy policies that prevent individuals’ sensitive information from being exposed [38]? There are several redaction software applications that use natural language processing to scan documents and automatically redact sensitive terms [39,40] (see the sketch after this list).
  • New concerns for fairness. Will courts and the legal profession have to change how they define ‘information versus advice’, the distinction that currently guides regulations about what types of technological help can be given to litigants? Also, if AI exposes patterns of arbitrary or biased decision-making in the courts, how will the courts respond by changing personnel, organizational structures, or court procedures to better provide fairness?
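
As a minimal sketch of the redaction idea raised in the privacy item above: the pattern-based version below catches only formatted identifiers, whereas the NLP-based tools cited would also recognize names, addresses, and other unformatted sensitive terms.

```python
import re

# Illustrative patterns for formatted identifiers only.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jane Doe, SSN 123-45-6789, at (555) 123-4567 or jd@example.com."))
```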

For many of these policy questions, there are government-focused ethics initiatives that the justice system can learn from, as they define best practices and guiding principles for how to integrate AI responsibly into public, powerful institutions [42,43,44].

6. Conclusion

This paper’s survey of proposals and applications for AI’s use for access to justice demonstrates how technology might be operationalized for social impact.

If there is more infrastructure-oriented work now that establishes how courts can share data responsibly, and sets new standards for privacy, transparency, fairness, and due process in regards to AI applications, this nascent set of projects may blossom into many more pilots over the next several years.

In a decade, there may be a full ecosystem of AI-powered courts, in which a person who faces a problem with eviction, credit card debt, child custody, or employment discrimination could have clear, affordable, efficient ways to use the public civil justice system to resolve their problem. Especially with AI offering more preventative, holistic support to litigants, it might have anti-poverty effects as well, ensuring that the legal system resolves people’s potential life crises rather than exacerbating them.