
Report a problem you’ve found with AI & legal help

The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute by entering information into this form, which feeds into the database.

We will make this database available in the near future, as we collect more records & review them.

For this database, we’re looking for specific examples of AI platforms (like ChatGPT, Bard, Bing Chat, etc.) providing problematic responses, like:

  • incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
  • hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
  • irrelevant, distracting, or off-topic information;
  • misrepresentation of the law;
  • overly simplified information that loses key nuance or cautions;
  • other responses that might be harmful to a person trying to get legal help.

You can send in any incidents you’ve experienced via this form: https://airtable.com/apprz5bA7ObnwXEAd/shrQoNPeC7iVMxphp

We will be reviewing submissions & making this incident database available in the future, for those interested.

Fill in the form to report an AI-Justice problem incident


Call for papers to the JURIX workshop on AI & Access to Justice

At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice.

There is an open call for submissions to the workshop, with the deadline now extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending.

The workshop will be held on December 18, 2023 in Maastricht, Netherlands (with hybrid participation possible).

See more about the conference at the main JURIX 23 website.

About the AI & A2J workshop

This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.

Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.

What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?

At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:

  • Identify the key challenges and opportunities for using AI to improve access to justice.
  • Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
  • Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
  • Develop proposals for how to hold AI models accountable for the public good.

Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.

Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).

Are you generally interested in AI & Access to Justice? Sign up for our Stanford Legal Design Lab AI-A2J interest list to stay in touch.

Submit a paper to the AI & A2J Workshop

We welcome submissions of 4-12 pages (using the IOS formatting guidelines). A selection will be made on the basis of workshop-level reviewing focusing on overall quality, relevance, and diversity.

Workshop submissions may be about the topics described above, including:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Deadline extended to November 20, 2023

Submission Link: Submit your 4-12 page paper here: https://easychair.org/my/conference?conf=jurixaia2j

Notification: November 28, 2023

Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands

More about the JURIX Conference

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year, the JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands, on December 18-20, 2023.

The proceedings of the conference will be published in the Frontiers in Artificial Intelligence and Applications series of IOS Press. JURIX follows the Golden Standard and provides one of the best dissemination platforms in AI & law.


AI Platforms & Privacy Protection through Legal Design

How can regulators, researchers, and tech companies proactively protect people’s rights & privacy, even as AI so quickly becomes ubiquitous?

by Margaret Hagan, originally published at Legal Design & Innovation

This past week, I had the privilege of attending the State of Privacy event in Rome, with policy, technical, and research leaders from Italy and Europe.

I was at a table focused on the intersection of Legal Design, AI platforms, and privacy protections.

Our multidisciplinary group spent several hours getting concrete: what are the scenarios and user needs around privacy & AI platforms? What are the main concerns and design challenges?

We then moved towards an initial brainstorm. What ideas for interventions, infrastructure, or processes could help move AI platforms towards greater privacy protections — and avoid privacy problems that have arisen in similar technology platform advancements in the recent past? What could we learn from privacy challenges, solutions, and failures that came with the rise of websites on the open Internet, the advancement of search engines, and the popular use of social media platforms?

Our group circled around some promising, exciting ideas for cross-Atlantic collaboration. Here is a short recap of them.

Learning from the User Burdens of Privacy Pop-ups & Cookie Banners

Can we avoid putting so many burdens on the user, like with cookie banners and privacy pop-ups on every website? We can learn from the current crop of privacy protections, which warn European visitors when they open any new website and require them to read, choose, and click through pop-up menus about cookies, privacy, and more. How can we lower these user burdens and avoid interfaces that cause privacy burn-out?

Smart AI privacy warnings, woven into interactions

Can the AI be smart enough to respond with warnings when people are crossing into a high-risk area? Perhaps instead of generalized warnings about privacy implications — a conversational AI agent can let a person know when they are sharing data/asking for information that has a higher risk of harm. This might be when a person asks a question about their health, finances, personal security, divorce/custody, domestic violence, or another topic that could have damaging consequences to them if others (their family members, financial institutions, law enforcement, insurance companies, or other third parties) found out. The AI could be programmed to be privacy-protective, to easily let a person choose at the moment about whether to take the risk of sharing this sensitive data, to help a person understand the risks in this specific domain, and to help the person delete or manage their privacy for this particular interaction.
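
As a rough sketch of what such an in-context check might look like (the topic lists, wording, and function names below are hypothetical, not any platform’s actual safeguard):

```python
# A rough sketch, not any platform's actual safeguard: a keyword-based
# "sensitive topic" gate that a chat pipeline could run before storing or
# transmitting a message. A real system would use a trained classifier;
# the topics and keywords here are hypothetical examples.

SENSITIVE_TOPICS = {
    "health": ["diagnosis", "medication", "therapy"],
    "finances": ["debt", "bankruptcy", "garnishment"],
    "family & safety": ["divorce", "custody", "domestic violence"],
}

def flag_sensitive(message: str) -> list[str]:
    """Return the sensitive topics a user message appears to touch."""
    text = message.lower()
    return [topic for topic, keywords in SENSITIVE_TOPICS.items()
            if any(keyword in text for keyword in keywords)]

def privacy_warning(message: str) -> str | None:
    """Build an in-context warning, or None if nothing was flagged."""
    topics = flag_sensitive(message)
    if not topics:
        return None
    return (
        f"You may be sharing sensitive information about {', '.join(topics)}. "
        "Would you like to continue, delete this message, or keep it out of "
        "your saved history?"
    )

print(privacy_warning("My ex is threatening me during our custody dispute."))
```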

Choosing the Right Moment for Privacy Warnings & Choices

Can warnings and choices around privacy come during the ‘right moment’? Perhaps it’s not best to warn people before they sign up for a service, or even right when they are logging on. This is typically when people are most hungry for AI interaction & information. They don’t want to be distracted. Rather, can the warnings, choices, and settings come during the interaction — or after it? A user is likely to have ‘buyer’s remorse’ with AI platforms: did I overshare? Who can see what I just shared? Could someone find out what I talked about with the AI? How can privacy terms & controls be easily accessible right when people need them, usually during these “clean up” moments?

Conducting More Varied User Research about AI & Protections

We need more user research in different cultures and demographics about how people use AI, relate to it, and critique it (or do not). To figure out how to develop privacy protections, warning/disclosure designs, and other techno-policy solutions, first we need a deeper understanding of various AI users, their needs and preferences, and their willingness to engage with different kinds of protections.

Building an International Network Working on AI & Privacy Protections

Could we have anchor universities, with strong computer science, policy, and law departments, that host workshops and training on the ethical development of AI platforms? These could help bring future technology leaders into cross-disciplinary contact with people from policy and law, to learn about social good matters like privacy. These cross-disciplinary groups could also help policy & law experts learn how to integrate their principles and research into more technical form, like by developing labeled datasets and model benchmarks.

Are you interested in ensuring there is privacy built into AI platforms? Are you working on user, technical, or policy research on what the privacy scenarios, needs, risks, and solutions might be on AI platforms? Please be in touch!

Thank you to Dr. Monica Palmirani for leading the Legal Design group at the State of Privacy event, at the lovely Museo Nazionale Etrusco di Villa Giulia in Rome.


AI & Legal Help Crossover Event with computer scientists and lawyers

In July, an interdisciplinary group of researchers at Stanford hosted the “AI and Legal Help Crossover” event, for stakeholders from the civil justice system and computer science to meet, talk, and identify promising next steps to advance the responsible development of AI for improving the justice system.

This builds off of our Spring workshop, co-hosted with the Self-Represented Litigation Network, which led justice professionals through a brainstorm of how AI could help them and their clients around access to justice.

Here are 3 topic areas that arose out of this workshop, which we’re excited to work on more in the future!

Topic 1: Next Steps for advancing AI & Justice work

These are the activities that participants highlighted as valuable for advancing AI & justice work:

Events that dive into AI applications, research, and evaluation in specific areas of the justice system. For example, could we hold meetings that focus on specific topics, like:

  • High volume, quick proceedings like for Debt, Traffic, Parking, and Eviction. These case types might have similar dynamics, processes, and litigant needs.
    • What are the ideas for applications that could improve the quality of justice and outcomes in these areas?
    • What kinds of research, datasets, and protocols might be done in these areas in particular, that would matter to policymakers, service providers, or communities?
  • Innovation hot areas like Eviction Diversion and Criminal Justice Diversion, where there already are many pilots happening to improve outcomes. If there is already energy to pilot new interventions in a particular area, can we amplify these with AI?

Local Justice/AI R&D Community-building, to have regional hubs in areas where there are anchor institutions with AI research/development capacity & those with justice system expertise. Can we have a network of local groups working on improving AI development & research, where local experts in computer science can learn about the opportunities to work with justice system actors — so that they are informed & connected to do relevant work?

Index of Justice/AI Research, Datasets, Pilots, and Partners, so that more people new to this area (both from technical and legal backgrounds) can see what is happening, build relationships, and collaborate with each other.

Domain Expert Community meetings that could attract more legal aid lawyers, self-help court staff, clerks, navigators, judicial officers, and those who have on-the-ground experience with helping litigants through the court system. Could we start gathering and standardizing their expertise — into more formal benchmarks, rating scales, and evaluation protocols?

Unauthorized Practice of Law & Regulatory Discussions to talk through where legal professional rules might play out with AI tools — and how they might be interpreted or adapted to best protect people from harm while benefiting people with increased access and empowerment.

National Group Leadership and Support, in which professional groups and consortia like the Legal Services Corporation, State Justice Institute Joint Technology Committee, CiTOC, Bureau of Justice Statistics, Department of Justice, or National Center for State Courts could help:

  • Define an agenda for projects, research, and evaluation needs
  • Encourage groups to standardize data and make it available
  • Call for pilots and partnerships, and help with the matchmaking of researchers, developers, and courts/legal aid groups
  • Incentivize pilots and evaluation with funding dedicated to human-centered AI for justice

Topic 2: Tasks where AI might help with justice systems research. 

We grouped the ideas for applications of AI in the justice system into some themes. These resonate with our earlier workshop on ideas for AI in the justice sector, which we held with the Self-Represented Litigation Network:

  • Litigant Empowerment applications
  • Service Improvement applications
  • Accountability applications
  • Research & System Design applications

Litigant Empowerment themes

  • AI for litigant decision-making, to help a person understand possible outcomes that may result from a certain claim, defense, or choice they make in the justice system. It could help them be more strategic with their choices, wording, etc. 
  • AI to craft better claims and arguments so that litigants or their advocates could understand the strongest claims, arguments, citations, and evidence to use. 
  • AI for narratives and document completion, to help a litigant move quickly from their summary of their facts and experiences to a properly formatted and written court filing.
  • AI for legalese to plain language translation, that could help a person understand a notice, contract, court order, or other legal document they receive.

Service Improvement themes

  • AI for legal aid or court centers to intake/triage users to the right issue area, level of service, and resources they can use.
  • AI for chat and coaching, to package experts’ knowledge about following a court process, filling in a form, preparing for a hearing, or other legal tasks.

Accountability themes

  • AI to spot policy/advocacy targets, where legal aid advocates, attorney general offices, or journalists could see which courts or judges might have issues with the quality of justice in their proceedings, where more training or advocacy for change might be needed.
  • AI to spot fraud, bad practices, and concerning trends. For example, can it scan petitions being filed in debt cases to flag to clerks where the dates, amounts, or claims mean that the claim is not valid? Or can it look through settlement agreements in housing or debt cases to find concerning terms or dynamics?

Research & System Design themes

  • AI to understand where processes need simplification, or where systems need to be reformed. It could use user error rates, continuances, low participation rates, or other factors to identify which parts of the justice system are the least accessible — and where rules, services, and technology need reform.
  • AI for understanding the court’s performance, to see what is happening not only in the case-level data but also at the document-level. This could give much more substance to what processes and outcomes people are experiencing.

Topic 3: Data that will be important to make progress on Justice & AI

Legal Procedure, Services, and Form data, that has been vetted by experts and approved as up to date. This data might then train a model of ‘reliable, authoritative’ legal information for each jurisdiction about what a litigant should know when dealing with a certain legal problem.

  • Could researchers work with the LSC and local legal aid & court groups that maintain self-help content (legal help websites, procedural guides, forms, etc.) to gather this local procedural information — and then build a model that can deliver high-quality, jurisdiction-specific procedural guidance?

Court Document data, that includes the substance of pleadings, settlements, and judgments. Access to datasets with substantive data about the claims litigants are making, the terms they agree to, and the outcomes in judgments can give needed information for research about the court, and also AI tools for litigants & service providers that analyze, synthesize, and predict.

  • Could courts partner with researchers to make filings and settlement documents available, in an ethical/privacy-friendly way? 

Domain Expert data in which they help rate or label legal data. What is good or bad? What is helpful or harmful? Building successful AI pilots will need input and quality control from domain experts — especially those who see how legal documents, processes, and services play out in practice. 

  • Could justice groups & universities help bring legal experts together to help define standards, label datasets, and give input on the quality of models’ output? What are the structures, incentives, and compensation needed to get legal experts more involved in this?

Opportunities & Risks for AI, Legal Help, and Access to Justice

As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.

Could AI tools like ChatGPT, Bing Chat, and Google Bard help get more people crucial information about their rights & the law?

Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could they help people fill in paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?

The Stakeholder Session

In Spring 2023, the Stanford Legal Design Lab collaborated with the Self-Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics.

The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.

An overview of what we covered in our stakeholder session with justice professionals.

The stakeholders discussed 3 main areas where AI could enhance access to justice and provide more help to individuals with legal problems.

  1. How AI could help professionals like legal aid or court staff improve their service offerings
  2. How AI could help community members & providers do legal problem-solving tasks
  3. How AI could help executives, funders, advocates, and community leaders better manage their organizations, train others, and develop strategies for impact.

Opportunity 1: For Legal Aid & Court Service Providers to deliver better services more efficiently

The first opportunity area focused on how AI could assist legal aid providers in improving their services. The participants identified four ways in which AI could be beneficial:

  1. Helping experts create user-friendly guides to legal processes & rights
  2. Improving the speed & efficacy of tech tool development
  3. Strengthening providers’ ability to connect with clients & build a strong relationship
  4. Streamlining intake and referrals, and improving the creation of legal documents.

Within each of these zones, participants had many specific ideas.

Opportunities for legal aid & court staff to use AI to deliver better services

Opportunity 2: For People & Providers to Do Legal Tasks

The second opportunity area focused on empowering people and providers to better perform legal tasks. The stakeholders identified five main ways AI could help:

  1. understanding legal rules and policies,
  2. identifying legal issues and directing a person to their menu of legal options,
  3. predicting likely outcomes and facilitating mutual resolutions,
  4. preparing legal documents and evidence, and
  5. aiding in the preparation for in-person presentations and negotiations.

How might AI help people understand their legal problem & navigate it to resolution?

Each of these 5 areas of opportunities is full of detailed examples. Professionals had extensive ideas about how AI could help lawyers, paraprofessionals, and community members do legal tasks in better ways. Explore each of the 5 areas by clicking on the images below.

Opportunity 3: For Org Leadership, Policymaking & Strategies

The third area focused on how AI could assist providers and policymakers in managing their organizations and strategies. The stakeholders discussed three ways AI could be useful in this zone:

  1. training and supporting service providers more efficiently,
  2. optimizing business processes and resource allocation, and
  3. helping leaders identify policy issues and create impactful strategies.

AI opportunities to help justice system leaders

Explore the ideas for better training, onboarding, volunteer capacity, management, and strategizing by clicking on the images below.

Possible Risks & Harms of AI in Civil Justice

While discussing these opportunity areas, the stakeholders also addressed the risks and harms associated with the increased use of AI in the civil justice system. Some of the concerns raised include over-reliance on AI without assessing its quality and reliability, the provision of inaccurate or biased information, the potential for fraudulent practices, the influence of commercial actors over public interest, the lack of empathy or human support in AI systems, the risk of reinforcing existing biases, and the unequal access to AI tools.

The whiteboard of professionals’ 1st round of brainstorming about possible risks to mitigate for a future of AI in the civil justice system

This list of risks is not comprehensive, but it offers a first typology that future research & discussions (especially with other stakeholders, like community members and leaders) can build upon.

Infrastructure & initiatives to prioritize now

Our session closed with a discussion of practical next steps. What can our community of legal professionals, court staff, academics, and tech developers be doing now to build a better future in which AI helps close the justice gap — and where the risks above are mitigated as much as possible?

The stakeholders proposed several infrastructure and strategy efforts that could lead to this better future. These include:

  • ethical data sharing and model building protocols,
  • the development of AI models specifically for civil justice, using trustworthy data from legal aid groups and courts to train the model on legal procedure, rights, and services,
  • the establishment of benchmarks to measure the performance of AI in legal use cases,
  • the adoption of ethical and professional rules for AI use,
  • recommendations for user-friendly AI interfaces, that can ensure people understand what the AI is telling them & how to think critically about the information it provides, and
  • the creation of guides for litigants and policymakers on using AI for legal help.

Thanks to all the professionals who participated in the Spring 2023 session. We look forward to a near future where AI can help increase access to justice & effective court and legal aid services — while also being held accountable and its risks being mitigated as much as possible.

We welcome further thoughts on the opportunity, risk, and infrastructure maps presented above — and suggestions for future events to continue building towards a better future of AI and legal help.


American Academy event on AI & Equitable Access to Legal Services

The Lab’s Margaret Hagan was a panelist at the May 2023 national event on AI & Access to Justice hosted by the American Academy of Arts & Sciences.

The event was called AI’s Implications for Equitable Access to Legal and Other Professional Services. It took place on May 10, 2023. Read more about the American Academy’s work on justice reform here.

More about the May event from the American Academy: “Increasingly capable AI tools like Chat GPT and Bing Chat will impact the accessibility, reliability, and regulation of legal and other professional services, like healthcare, for an underserved public. In this event, Jason Barnwell, Margaret Hagan, and Andrew M. Perlman discuss these and other implications of AI’s rapidly evolving capabilities.”

You can see a recording of the panel, which featured Jason Barnwell (Microsoft), Margaret Hagan (Stanford Legal Design Lab), and Andrew M. Perlman (Suffolk Law School).


AI Goes to Court: The Growing Landscape of AI for Access to Justice

By Jonah Wu

Student research fellow at Legal Design Lab, 2018-2019

1. Can AI help improve access to civil courts?

Civil court leaders have a strong new interest in how artificial intelligence can improve the quality and efficiency of legal services in the justice system, especially for problems that self-represented litigants face [1,2,3,4,5]. The promise is that artificial intelligence can address the fundamental crises in courts: that ordinary people are not able to use the system clearly or efficiently; that courts struggle to manage vast amounts of information; and that litigants and judicial officials often have to make complex decisions with little support.

If AI is able to gather and sift through vast troves of information, identify patterns, predict optimal strategies, detect anomalies, classify issues, and draft documents, the promise is that these capabilities could be harnessed for making the civil court system more accessible to people.

The question, then, is how real these promises are, and how they are being implemented and evaluated. Now that early experimentation and agenda-setting have begun, the study of AI as a means for enhancing the quality of justice in the civil court system deserves greater definition. This paper surveys current applications of AI in the civil court context. It aims to lay a foundation for further case studies, observational studies, and shared documentation of AI for access to justice development research. It catalogs current projects, reflects on the constraints and infrastructure issues, and proposes an agenda for future development and research.

2. Background to the Rise of AI in the Legal System

When I use the term Artificial Intelligence, I distinguish it from general software applications that are used to input, track, and manage court information. The basic criterion for AI-oriented projects is that the technology has the capacity to perceive knowledge, make sense of data, generate predictions or decisions, translate information, or otherwise simulate intelligent behavior. AI does not include all court technology innovations. For example, I am not considering websites that broadcast information to the public; case or customer management systems that store information; or kiosks, apps, or mobile messages that communicate case information to litigants.

The discussion of AI in criminal courts is currently more robust than in civil courts. It has been proposed as a means to monitor and recognize defendants; support sentencing and bail decisions; and better assess evidence [3]. Because of the rapid rise of risk assessment AI in the setting of bail and sentencing, there has been more description and debate of AI in the criminal context [6]. There has been less focus on AI’s potential, or its concerns, in the civil justice system, including for family, housing, debt, employment, and consumer litigation. That said, there has been a robust discourse over the past 15 years about what technology applications and websites could be used by courts and legal aid groups to improve access to justice [7].

The current interest in AI for civil court improvements is in sync with a new abundance of data. As more courts have gathered data about administration, pleadings, litigant behavior, and decisions [1], powerful opportunities have emerged for research and analytics in the courts, which can lead to greater efficiency and better design of services. Some groups have managed to use data to bring enormous new volumes of cases into the court system — like debt collection agencies, which have automated filings of cases against people for debt [8], often resulting in complaints that have missing or incorrect information and minimal, ineffective notice to defendants. If litigants like these can harness AI strategies to flood the court with cases, could the courts use their own AI strategies to manage and evaluate these cases and others — especially to better protect unwitting defendants against low-quality lawsuits?

The rise in interest in AI coincides with state courts experiencing economic pressure: budgets are cut, hours are reduced, and some locations are even closed [9]. Despite financial constraints, courts are expected to provide modern, digital, responsive services like other consumer services do. This presents a challenging expectation for the courts. How can they provide judicial services in sync with rapidly modernizing service sectors — in finance, medicine, and other government bodies — within significant cost constraints? The promise of AI is that it can scale up quality services and improve efficiency, to improve performance and save costs [10].

A final background factor to consider is the growing concern over public perceptions of the judicial system. Yearly surveys indicate that communities find courts out of touch with the public, with calls for greater empathy and engagement with “everyday people” [11]. Given that the mission of the court is to provide an avenue to lawful justice for its constituents, if AI can help the court better achieve that mission without adding adverse risks, it would help the courts establish greater procedural and distributive justice for litigants, and hopefully bolster both its legitimacy to the public and the public’s engagement with it.

3. What could be? Proposals in the Literature for AI for access to justice

What has the literature proposed on how AI techniques can address the access to justice crisis in civil courts? Over the past several decades, distinct use cases have been proposed for development. There is a mix of litigant-focused use cases (to help them understand the system and make stronger claims), and court-focused use cases (to help it improve its efficiency, consistency, transparency, and quality of services).

  • Answer a litigant’s questions about how the law applies to them. Computational law experts have proposed automated legal reasoning as a way to understand if a given case is in accordance with the law or not [12]. Court leaders also envision AI helping litigants conduct effective, direct research into how the law would apply to them [4,5]. Questions of how the law would apply to a given case lie on a spectrum of complexity. Questions that are more straightforwardly algorithmic (e.g., if a person exceeded a speed limit, or if a quantity or date is in an acceptable range) can be automated with little technical challenge [13]. Questions that have more qualitative standards — like whether something was reasonable, unconscionable, foreseeable, or done in good faith — are not as easily automated, but they might be with greater work in deep learning and neural networks. Many propose that expert systems or AI-powered chatbots might help litigants know their rights and make claims [14].
  • Analyze the quality of a legal claim and evidence. Several proposals are around making it easier to understand what has been submitted to court, and how a case has proceeded. Some exploratory work has pointed towards how AI could automatically classify a case docket — the chronological events in a case — so that it could be understood computationally [15]. Machine learning could find patterns in claims and other legal filings, to indicate whether something has been argued well and whether the law supports it, and to evaluate it against competing claims [16].
  • Provide coordinated guidance for a person without a lawyer. Many have proposed focusing on developing a holistic AI-based system to guide people without lawyers through the choices and procedure of a civil court case. One vision is of an advisory system that would help a person understand available forms of relief; help them understand if they can meet the requirements; inform them of procedural requirements; and help them draft court documents [17,18].
  • Predict and automate decision-making. Another proposal, discussed within the topic of online dispute resolution, is around how AI could either predict how a case will be decided (and thus give litigants a stronger understanding of their chances), or actually generate a proposal of how a dispute should be settled [19,20]. In this way, prediction of judicial decisions could be useful to access to justice. It could be integrated into online court platforms where people are exploring their legal options, or where they are entering and exchanging information in their case. The AI would help litigants make better choices regarding how they file, and it would help courts expedite decision-making by either supporting or replacing human judges’ rulings.

4. What is happening so far? AI in action for access

With many proposals circulating about how AI might be applied for access to justice, where can we see these possibilities being developed and piloted with courts? Our initial survey identifies a handful of applications in action.

4.1. Predicting settlement arrangements, judicial decisions, and other outcomes of claims

One of the most robust areas of AI in access to justice work has been in developing applications to predict how a claim, case, or settlement will be resolved by a court. This area of predictive analytics has been demonstrated in many research projects, and in some cases has been integrated into court workflows.

In Australian Family Law courts, a team of artificial intelligence experts and lawyers has begun to develop the Split-Up system, which uses rules-based reasoning in concert with neural networks to predict outcomes for property disputes in divorce and other family law cases [21]. The Split-Up system is used by judges to support their decision-making, by helping them identify the assets of the marriage that should be included in a settlement, and then establishing what percentage of the common pool each party should receive — a discretionary judicial choice based on factors including contributions, amount of resources, and future needs. The system incorporates 94 relevant factors into its analysis, which uses neural network statistical techniques. The judge can then propose a final property order based on the system’s analysis. The system also seeks to make its reasoning transparent, so it uses Toulmin argument structures to represent how it reached its predictions.
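
As a rough illustration of this hybrid architecture (and not the actual Split-Up code), a rules-based gate might first check whether there is a common pool to divide before a learned model estimates the discretionary split. Every factor name and weight below is invented:

```python
# Toy illustration only (not the actual Split-Up implementation): a hybrid
# pipeline in which rules-based reasoning gates the case, and a learned
# model estimates the discretionary percentage split. The factor names and
# weights are invented; the real system weighs ~94 factors with
# neural-network techniques.

from dataclasses import dataclass


@dataclass
class CaseFactors:
    contribution_score: float   # party A's relative contributions, 0..1
    future_needs_score: float   # party A's relative future needs, 0..1
    common_pool_assets: float   # total value of the divisible common pool


def rule_gate(case: CaseFactors) -> bool:
    """Rules-based step: is there a common pool to divide at all?"""
    return case.common_pool_assets > 0


def predict_share(case: CaseFactors) -> float:
    """Stand-in for the neural-network step: predict party A's share (0..1).
    A trained model would replace this hand-weighted score."""
    score = (0.5
             + 0.3 * (case.contribution_score - 0.5)
             + 0.2 * (case.future_needs_score - 0.5))
    return min(max(score, 0.0), 1.0)


def propose_order(case: CaseFactors) -> str:
    """Combine both steps into a draft order a judge could review."""
    if not rule_gate(case):
        return "No common pool identified; no property order proposed."
    share = predict_share(case)
    return f"Draft order: party A receives {share:.0%} of the common pool."


print(propose_order(CaseFactors(0.7, 0.4, 250_000.0)))
```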

Researchers have created algorithms to predict Supreme Court and European Court of Human Rights decisions [22,23,24]. They use natural language processing and machine learning to construct models that predict the courts’ decisions with strong accuracy. Their predictions draw from the formal facts submitted in the case to identify what the likely outcome, and potentially even individual justices’ votes, will be. This judicial decision prediction research could possibly be used to offer predictive analytic tools to litigants, so they can better assess the strength of their claim and understand what outcomes they might face. Legal technology companies like Ravel and Lex Machina claim that they can predict judges’ decisions and case behavior, or the outcomes of an opposing party [25,26]. These applications are mainly aimed at corporate-level litigation, rather than access to justice.

4.2. Detecting abuse and fraud against people the court oversees

Courts’ role in overseeing guardians and conservators means that they should be reducing financial exploitation of vulnerable people by those appointed to protect them. With particular concern for financial abuse of the elderly by their conservators or guardians, a team in Utah began building an AI tool to identify likely fraud in the reported financial transactions that conservators or guardians submit to the court. The system, developed in concert with a Minnesota court system in a hackathon, would detect anomalies and fraud-related patterns, and send flag notifications to courts to investigate further [28].
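
A minimal sketch of the kind of anomaly detection such a tool might rely on, here using scikit-learn’s IsolationForest; the features and numbers are hypothetical stand-ins for features engineered from real conservator reports:

```python
# Illustrative sketch only: flag unusual conservator transactions with
# scikit-learn's IsolationForest. The feature set and values below are
# hypothetical; a real court tool would engineer features from actual
# transaction reports filed with the court.

import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, days_since_last_report, payee_novelty_score]
historical_transactions = np.array([
    [120.0, 30, 0.1],
    [95.0, 31, 0.0],
    [110.0, 29, 0.1],
    [105.0, 30, 0.0],
])

model = IsolationForest(contamination=0.05, random_state=0)
model.fit(historical_transactions)

new_transactions = np.array([
    [100.0, 30, 0.1],     # routine
    [9500.0, 2, 0.9],     # large, rushed, unfamiliar payee
])

# predict() returns 1 for inliers and -1 for anomalies to flag for review
for row, label in zip(new_transactions, model.predict(new_transactions)):
    if label == -1:
        print(f"Flag for court review: {row}")
```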

4.3. Preventative Diagnosis of legal issues, matching to services, and automating relief

A robust branch of applications has been around using AI techniques to spot people’s legal needs (that they potentially did not know they had), and then either match them to a service provider or automate a service for them, to help resolve their need. This approach has begun with the expungement use case — in which states have policies to help people clear their criminal records, but without widespread uptake. With this problem in mind, groups have developed AI programs to automatically flag who has a criminal record to clear, and then to streamline the expungement process for their region. In Maryland, Matthew Stubenberg from Maryland Volunteer Lawyers Service (now in Harvard’s A2J Lab) built a suite of tools to spot the organization’s clients’ problems, including overdue bills and criminal records that could be expunged. This tool helped legal aid attorneys diagnose their clients’ problems. Stubenberg also made the criminal record application public-facing, as MDExpungement, for anyone to automatically find out if they have a criminal record and to submit a request to clear it [29].

Code for America is working inside courts to develop another AI application for expungement. They are working with the internal databases of California courts to automatically identify expungement-eligible records, eliminating the need for individuals to apply for relief themselves [30].

The authors, in partnership with researchers at Suffolk LIT Lab, are working on an AI application to automatically detect legal issues in people’s descriptions of their life problems, which they share in online forums, social media, and search queries [31]. This project involves labeling datasets of people’s problem stories, taken from Reddit and online virtual legal clinics, in order to train a classifier to automatically recognize what specific legal issue a person might have based on their story. This classifier could be used to power referral bots (that send people messages with local resources and agencies that could help them), or to translate people’s problem stories into actionable legal triage and advisory systems, as has been envisioned in the literature.
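
A simplified sketch of this kind of pipeline: train a text classifier on labeled problem stories, then predict a legal-issue label for a new story. The tiny inline dataset and labels are invented for illustration; the actual project uses labeled stories from Reddit and online legal clinics:

```python
# Illustrative sketch of an issue-spotting classifier: TF-IDF features
# feeding a logistic regression. The stories and labels below are invented;
# a real model would train on thousands of labeled problem stories.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stories = [
    "My landlord gave me three days to leave and changed the locks.",
    "A collector keeps calling about a credit card debt I don't recognize.",
    "My ex won't follow our custody schedule for the kids.",
    "I got an eviction notice even though I paid rent on time.",
    "I was sued over an old medical bill I thought was settled.",
    "We can't agree on visitation after the divorce.",
]
labels = ["housing", "debt", "family", "housing", "debt", "family"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(stories, labels)

new_story = "The sheriff posted a notice on my door saying I have to move out."
issue = classifier.predict([new_story])[0]
print(f"Predicted legal issue: {issue}")  # this label could drive a referral bot
```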

4.4. Analyzing quality of claims and citations

Considering how to help courts be more efficient in their analysis of claims and evidence, there are some applications — like the product Clerk from the company Judicata — that can read, analyze, and score submissions that people and lawyers make to the court [32]. These applications can assess the quality of a legal brief, giving clerks, judges, or litigants the ability to identify the source of the arguments, cross-check them against the original, and possibly also find other related cases. In addition to improving the efficiency of analysis, the tool could be used for better drafting of submissions to the court — with litigants checking the quality of their pleadings before submitting them.
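
One small building block of such brief analysis can be sketched directly: extracting case citations so they can be checked against the cited sources. The regular expression below covers only a few simple reporter formats and is far narrower than what a production tool would use:

```python
# Toy example of one building block for brief analysis: pulling case
# citations out with a regular expression so they can be cross-checked.
# The pattern covers only simple "Volume Reporter Page" citations.

import re

# e.g., "347 U.S. 483" or "550 U.S. 544"
CITATION_PATTERN = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|F\.2d|F\.3d)\s+(\d{1,4})\b"
)

brief_text = """
Plaintiff relies on Brown v. Board of Education, 347 U.S. 483, and argues
that the pleading standard of Bell Atlantic v. Twombly, 550 U.S. 544,
is satisfied here.
"""

for volume, reporter, page in CITATION_PATTERN.findall(brief_text):
    print(f"Found citation to verify: {volume} {reporter} {page}")
```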

4.5. Active, intelligent case management

The Hebei High Court in China has reported the development of a smart court management AI, termed the Intelligent Trial 1.0 system [33]. It automatically scans and digitizes filings; classifies documents into electronic files; matches the parties to existing case parties; identifies relevant laws, cases, and legal documents to be considered; automatically generates all necessary court procedural documents like notices and seals; and distributes cases to judges so they are put on the right track. The system coordinates these AI tasks into a workstream that can reduce court staff’s and judges’ workloads.

4.6. Online dispute resolution platforms and automated decision-making

Online dispute resolution platforms have grown around the United States, some of them using AI techniques to sort claims and propose settlements. Many ODR platforms do not use AI, but rather act as collaboration and streamlining platforms for litigants’ tasks. ODR platforms like Rechtwijzer, MyLawBC, and the British Columbia Civil Resolution Tribunal use some AI techniques to sort which people can use the platform to tackle a problem, and to automate decision-making and settlement or outcome proposals [34].

We also see new pilots of online dispute platforms in Australia, in the state of Victoria with its VCAT pilot for small claims (now on hiatus, awaiting future funding) — and in Utah, for small claims in one location outside Salt Lake City.

These pilots are using platforms like Modria (part of Tyler Technologies), Modron, or Matterhorn from Court Innovations. How much AI is part of these systems is not clear — they seem to be mainly platforms for logging details and preferences, communicating between parties, and drafting/signing settlements (without any algorithm or AI tool making a decision proposal or crafting a strategy for parties). If the pilots are successful and become ongoing projects, then we can expect future iterations to involve more AI-powered recommendations or decision tools.

5. Agenda for Development and Infrastructure of AI in access to justice

If an ecosystem of access to justice AI is to be accelerated, what is the agenda to guide the growth of projects? There is work to be done on the infrastructure of sharing data, defining ethics standards, security standards, and privacy policies. In addition, there is organizational and coalition-building work, to allow for more open innovation and cross-organization initiatives to grow.

5.1. Opening and standardizing datasets

Currently, the field of AI for access to justice is hampered by the lack of open, labeled datasets. Courts do hold relatively small datasets, but there are no standard protocols to make them available to the public or to researchers, nor are there labeled datasets to be used in training AI tools [35]. There are a few examples of labeled court datasets, like from the Board of Veterans Appeals [36]. A newly announced US initiative, the National Court Open Data Standards Project, will promote standardization of existing court data, so that there can be more seamless sharing and cross-jurisdiction projects [37].

5.2. Making Policies to Manage Risks

There should be multi-stakeholder design of this infrastructure, to define an evolving set of guidance around the following large risks that court administrators have identified with new AI in courts [4,5].

  • Bias of possible Training Data Sets. Can we better spot, rectify, and condition for inherent biases in the data sets that we are using to train the new AI?
  • Lack of transparency of AI Tools. Can we create standard ways to communicate how an AI tool works, to ensure there is transparency to litigants, defendants, court staff, and others, so that there can be robust review of it?
  • Privacy of court users. Can we have standard redaction and privacy policies that prevent individuals’ sensitive information from being exposed [38]? There are several redaction software applications that use natural language processing to scan documents and automatically redact sensitive terms [39,40].
  • New concerns for fairness. Will courts and the legal profession have to change how they define ‘information versus advice,’ which currently guides regulations about what types of technological help can be given to litigants? Also, if AI exposes patterns of arbitrary or biased decision-making in the courts, how will the courts respond — changing personnel, organizational structures, or court procedures to better provide fairness?

For many of these policy questions, there are government-focused ethics initiatives that the justice system can learn from, as they define best practices and guiding principles for how to integrate AI responsibly into public, powerful institutions [42,43,44].

6. Conclusion

This paper’s survey of proposals and applications for AI’s use for access to justice demonstrates how technology might be operationalized for social impact.

If there is more infrastructure-oriented work now — establishing how courts can share data responsibly, and setting new standards for privacy, transparency, fairness, and due process in regard to AI applications — this nascent set of projects may blossom into many more pilots over the next several years.

In a decade, there may be a full ecosystem of AI-powered courts, in which a person who faces a problem with eviction, credit card debt, child custody, or employment discrimination could have clear, affordable, efficient ways to use the public civil justice system to resolve their problem. Especially with AI offering more preventative, holistic support to litigants, it might have anti-poverty effects as well, ensuring that the legal system resolves people’s potential life crises, rather than exacerbating them.


Houston.ai access AI

Legal Server has a project, Houston.AI: a new set of tools that allows for smarter intake of people, identification of their issues, and referral to the right support.

What?

Houston.AI is a web-based platform designed to help non-profit legal aid agencies more effectively serve those who cannot afford attorneys. Comprised of a series of micro-services leveraging machine learning, artificial intelligence, and expert systems, Houston.AI is designed to perform many of the simple and routine tasks that lawyers do throughout their day to serve clients.

Such services include (a sketch combining two of them appears after the list):

  • Legal Issue Spotting
  • Entity Extraction
  • Document Analysis (using Computer Vision)
  • Tonal Analysis
  • Expert Systems to Analyze potential defenses or potential remedies
  • Attorney Necessity Scaling
  • Predictive Analytics (time and outcomes)
  • Intelligent Routing of Cases to Agencies or Attorneys (based on Open Referral)
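
Here is a purely illustrative sketch (not Houston.AI’s actual code or API) of how two of these micro-services, issue spotting and intelligent routing, might compose in an intake pipeline; the issue labels, agency directory, and function names are all hypothetical:

```python
# Hypothetical composition of two intake micro-services: spot the legal
# issue in a problem story, then route the person to an agency. Labels,
# agencies, and functions are invented for illustration only.

def spot_issue(problem_story: str) -> str:
    """Stand-in for an ML issue-spotting service, reduced to keyword rules."""
    text = problem_story.lower()
    if "evict" in text or "landlord" in text:
        return "housing"
    if "debt" in text or "collector" in text:
        return "consumer-debt"
    return "general"

# Hypothetical directory, in the spirit of an Open Referral-style dataset
AGENCIES = {
    ("housing", "77002"): "Houston Housing Legal Aid Clinic",
    ("consumer-debt", "77002"): "Texas Consumer Debt Defense Project",
}

def route(problem_story: str, zip_code: str) -> str:
    """Chain the services: spot the issue, then pick a referral."""
    issue = spot_issue(problem_story)
    agency = AGENCIES.get((issue, zip_code), "General legal aid hotline")
    return f"Issue: {issue}. Referred to: {agency}."

print(route("My landlord is trying to evict me next week.", "77002"))
```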

Why?

In our war to provide meaningful access to justice, it is unrealistic to think that the current army of lawyers devoted to this cause could possibly address the overwhelming legal needs of the most vulnerable and underserved among us without a huge infusion of government funding, a highly unlikely scenario in today’s climate. As such, we must significantly change our strategy on the frontlines to exploit advances in technology.

Simply put: for many of the necessary but routine tasks that lawyers do every day, humans are too slow and too few compared to autonomous machines.

Achieving success on the (asymmetrical) battlefield requires careful coordination between generals (human lawyers) and a cavalry of autonomous foot soldiers (high-speed artificial intelligence applications, leveraging continuously advancing algorithms). In this sense, machine learning, as envisioned by this project, is analogous to West Point: preparing and training Justice Bots to help individuals overcome access issues that so pervade the American judicial system.

In the end, these on-demand intelligent resources will allow lawyers to practice at the top of their license (i.e., in their highest and best roles as counselors and advocates) in a far more efficient and effective way, all the while empowering those in need through increased access and equipping them to make better choices, which is to everyone’s benefit.