Opportunities & Risks for AI, Legal Help, and Access to Justice

As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.

Could AI tools like ChatGPT, Bing Chat, and Google Bard help more people get crucial information about their rights & the law?

Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could they help them fill out paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?

The Stakeholder Session

In Spring 2023, the Stanford Legal Design Lab collaborated with the Self-Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics.

The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.

An overview of what we covered in our stakeholder session with justice professionals.

The stakeholders discussed three main areas where AI could enhance access to justice and provide more help to individuals with legal problems.

  1. How AI could help professionals like legal aid or court staff improve their service offerings
  2. How AI could help community members & providers do legal problem-solving tasks
  3. How AI could help executives, funders, advocates, and community leaders better manage their organizations, train others, and develop strategies for impact.

Opportunity 1: For Legal Aid & Court Service Providers to Deliver Better Services More Efficiently

The first opportunity area focused on how AI could assist legal aid providers in improving their services. The participants identified four ways in which AI could be beneficial:

  1. Helping experts create user-friendly guides to legal processes & rights
  2. Improving the speed & efficacy of tech tool development
  3. Strengthening providers’ ability to connect with clients & build a strong relationship
  4. Streamlining intake and referrals, and improving the creation of legal documents.

Within each of these zones, participants had many specific ideas.

Opportunities for legal aid & court staff to use AI to deliver better services

Opportunity 2: For People & Providers to Do Legal Tasks

The second opportunity area focused on empowering people and providers to better perform legal tasks. The stakeholders identified five main ways AI could help:

  1. understanding legal rules and policies,
  2. identifying legal issues and directing a person to their menu of legal options,
  3. predicting likely outcomes and facilitating mutual resolutions,
  4. preparing legal documents and evidence, and
  5. aiding in the preparation for in-person presentations and negotiations.

How might AI help people understand their legal problem & navigate it to resolution?

Each of these five opportunity areas is full of detailed examples. Professionals had extensive ideas about how AI could help lawyers, paraprofessionals, and community members do legal tasks in better ways.

Opportunity 3: For Org Leadership, Policymaking & Strategies

The third area focused on how AI could assist providers and policymakers in managing their organizations and strategies. The stakeholders discussed three ways AI could be useful in this zone:

  1. training and supporting service providers more efficiently,
  2. optimizing business processes and resource allocation, and
  3. helping leaders identify policy issues and create impactful strategies.

AI opportunities to help justice system leaders

Participants also mapped out ideas for better training, onboarding, volunteer capacity, management, and strategizing.

Possible Risks & Harms of AI in Civil Justice

While discussing these opportunity areas, the stakeholders also addressed the risks and harms associated with the increased use of AI in the civil justice system. Some of the concerns raised include:

  • over-reliance on AI without assessing its quality and reliability,
  • the provision of inaccurate or biased information,
  • the potential for fraudulent practices,
  • the influence of commercial actors over the public interest,
  • the lack of empathy or human support in AI systems,
  • the risk of reinforcing existing biases, and
  • unequal access to AI tools.

The whiteboard of professionals’ first round of brainstorming about possible risks to mitigate in a future of AI in the civil justice system

This list of risks is not comprehensive, but it offers a first typology that future research & discussions (especially with other stakeholders, like community members and leaders) can build upon.

Infrastructure & initiatives to prioritize now

The session closed with a discussion of practical next steps. What can our community of legal professionals, court staff, academics, and tech developers do now to build a better future in which AI helps close the justice gap, and where the risks above are mitigated as much as possible?

The stakeholders proposed several infrastructure and strategy efforts that could lead to this better future. These include:

  • ethical data sharing and model building protocols,
  • the development of AI models specifically for civil justice, using trustworthy data from legal aid groups and courts to train the model on legal procedure, rights, and services,
  • the establishment of benchmarks to measure the performance of AI in legal use cases,
  • the adoption of ethical and professional rules for AI use,
  • recommendations for user-friendly AI interfaces that help ensure people understand what the AI is telling them & how to think critically about the information it provides, and
  • the creation of guides for litigants and policymakers on using AI for legal help.

Thanks to all the professionals who participated in the Spring 2023 session. We look forward to a near future where AI can help increase access to justice & effective court and legal aid services, while also being held accountable and having its risks mitigated as much as possible.

We welcome further thoughts on the opportunity, risk, and infrastructure maps presented above — and suggestions for future events to continue building towards a better future of AI and legal help.

American Academy event on AI & Equitable Access to Legal Services

The Lab’s Margaret Hagan was a panelist at the May 2023 national event on AI & Access to Justice hosted by the American Academy of Arts & Sciences.

The event was called AI’s Implications for Equitable Access to Legal and Other Professional Services. It took place on May 10, 2023. Read more about the American Academy’s work on justice reform here.

More about the May event from the American Academy: “Increasingly capable AI tools like Chat GPT and Bing Chat will impact the accessibility, reliability, and regulation of legal and other professional services, like healthcare, for an underserved public. In this event, Jason Barnwell, Margaret Hagan, and Andrew M. Perlman discuss these and other implications of AI’s rapidly evolving capabilities.”

You can see a recording of the panel, which featured Jason Barnwell (Microsoft), Margaret Hagan (Stanford Legal Design Lab), and Andrew M. Perlman (Suffolk Law School).