
JURIX ’24 AI + A2J Schedule

On December 11, 2024, in Brno, Czechia & online, we held our second annual AI for Access to Justice Workshop at the JURIX Conference.

The academic workshop is organized by Quinten Steenhuis (Suffolk University Law School / LIT Lab), Margaret Hagan (Stanford Law School / Legal Design Lab), and Hannes Westermann (Maastricht University Faculty of Law).

In Autumn 2024, following a very competitive application process, 22 papers and 5 demos were selected.

Each of the following presentations is accompanied by a 10-page research paper or, for the demos, a shorter paper. The accepted paper drafts are available at this Google Drive folder.

Thank you to all of the contributors and participants in the workshop!

Session 1: AI for A2J Planning – Risks, Limits, Strategies

  • LLMs & Legal Aid: Understanding Legal Needs Exhibited Through User Queries, Michal Kuk and Jakub Harašta
  • Spreading the Risk of Scalable Legal Services: The Role of Insurance in Expanding Access to Justice, David Chriki, Harel Omer and Roee Amir
  • Exploring the potential and limitations of AI to enhance children’s access to justice, Dr. Boglárka Jánoskúti and Dr. Dóra Kiss
  • Health Insurance Coverage Rules Interpretation Corpus: Law, Policy, and Medical Guidance for Health Insurance Coverage Understanding, Mike Gartner

Session 2: AI for Legal Aid Services – Part A

  • Utilizing Large Language Models for Legal Aid Triage, Amit Haim and Christoph Engel
  • Measuring What Matters: Developing Human-Centered Legal Q-and-A Quality Standards through Multi-Stakeholder Research, Margaret Hagan
  • Demo: Digital Transformation in Child and Youth Welfare: A Concept for Implementing a Web-based Counseling Assistant, Florian Gerlach

Session 3: AI for Legal Aid Services – Part B

  • Demo: Green Advice: Using RAG for Actionable Legal Information, Repairosaurus Rex, Nicholas Burka, Ali Cook, Sam Flynn, Sateesh Nori
  • Demo: Inclusive AI design for justice in low-literacy environments, Avanti Durani and Shivani Sathe
  • Managing Administrative Law Cases using an Adaptable Model-driven Norm-enforcing Tool, Marten Steketee, Nina Verheijen and L. Thomas van Binsbergen
  • A Legal Advisor Bot Towards Access to Justice, Adam Kaczmarczyk, Tomer Libal and Aleksander Smywiński-Pohl
  • Electrified Apprenticeship: An AI Learning Platform for Law Clinics and Beyond, Brian Rhindress and Matt Samach

Session 4: NLP for Access to Justice

  • Demo: LIA: An AI-Powered Legal Information Assistant to Close the Access to Justice Gap, Scheree Gilchrist and Helen Hobson
  • Using Chat-GPT to Extract Principles of Law for the Sake of Prediction: an Exploration conducted on Italian Judgments concerning LGBT(QIA+) Rights, Marianna Molinari, Marinella Quaranta, Ilaria Angela Amantea and Guido Governatori
  • Legal Education and Knowledge Accessibility by Legal LLM, Sieh-Chuen Huang, Wei-Hsin Wang, Chih-Chuan Fan and Hsuan-Lei Shao
  • Evaluating Generative Language Models with Argument Attack Chains, Cor Steging, Silja Renooij and Bart Verheij

Session 5: Data Quality, Narratives, and Safety Issues

  • Potential Risks of Using Justice Tech within the Colombian Judicial System in a Rural Landscape, Maria Gamboa
  • Decoding the Docket: Machine Learning Approaches to Party Name Standardization, Logan Pratico
  • Demo: CLEO’s narrative generator prototype: Using GenAI to help unrepresented litigants tell their stories, Erik Bornmann
  • Analyzing Images of Legal Documents: Toward Multi-Modal LLMs for Access to Justice, Hannes Westermann and Jaromir Savelka

Opportunities & Risks for AI, Legal Help, and Access to Justice

As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.

Could AI tools like ChatGPT, Bing Chat, and Google Bard help get more people crucial information about their rights & the law?

Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could they help people fill in paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?

The Stakeholder Session

In Spring 2023, the Stanford Legal Design Lab collaborated with the Self Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics.

The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.

An overview of what we covered in our stakeholder session with justice professionals.

The stakeholders discussed 3 main areas where AI could enhance access to justice and provide more help to individuals with legal problems.

  1. How AI could help professionals like legal aid or court staff improve their service offerings
  2. How AI could help community members & providers do legal problem-solving tasks
  3. How AI could help executives, funders, advocates, and community leaders better manage their organizations, train others, and develop strategies for impact

Opportunity 1: For Legal Aid & Court Service Providers to Deliver Better Services More Efficiently

The first opportunity area focused on how AI could assist legal aid providers in improving their services. The participants identified four ways in which AI could be beneficial:

  1. Helping experts create user-friendly guides to legal processes & rights
  2. Improving the speed & efficacy of tech tool development
  3. Strengthening providers’ ability to connect with clients & build a strong relationship
  4. Streamlining intake and referrals, and improving the creation of legal documents

Within each of these zones, participants had many specific ideas.

Opportunities for legal aid & court staff to use AI to deliver better services

Opportunity 2: For People & Providers to Do Legal Tasks

The second opportunity area focused on empowering people and providers to better perform legal tasks. The stakeholders identified five main ways AI could help:

  1. understanding legal rules and policies,
  2. identifying legal issues and directing a person to their menu of legal options,
  3. predicting likely outcomes and facilitating mutual resolutions,
  4. preparing legal documents and evidence, and
  5. aiding in the preparation for in-person presentations and negotiations.
How might AI help people understand their legal problem & navigate it to resolution?

Each of these 5 areas of opportunities is full of detailed examples. Professionals had extensive ideas about how AI could help lawyers, paraprofessionals, and community members do legal tasks in better ways. Explore each of the 5 areas by clicking on the images below.

Opportunity 3: For Org Leadership, Policymaking & Strategies

The third area focused on how AI could assist providers and policymakers in managing their organizations and strategies. The stakeholders discussed three ways AI could be useful in this zone:

  1. training and supporting service providers more efficiently,
  2. optimizing business processes and resource allocation, and
  3. helping leaders identify policy issues and create impactful strategies.
AI opportunities to help justice system leaders

Explore the ideas for better training, onboarding, volunteer capacity, management, and strategizing by clicking on the images below.

Possible Risks & Harms of AI in Civil Justice

While discussing these opportunity areas, the stakeholders also addressed the risks and harms associated with the increased use of AI in the civil justice system. Concerns raised include:

  • over-reliance on AI without assessing its quality and reliability,
  • the provision of inaccurate or biased information,
  • the potential for fraudulent practices,
  • the influence of commercial actors over the public interest,
  • the lack of empathy or human support in AI systems,
  • the risk of reinforcing existing biases, and
  • unequal access to AI tools.

The whiteboard from the professionals’ first round of brainstorming about possible risks to mitigate in a future of AI in the civil justice system

This list of risks is not comprehensive, but it offers a first typology that future research & discussions (especially with other stakeholders, like community members and leaders) can build upon.

Infrastructure & initiatives to prioritize now

Our session closed with a discussion of practical next steps. What can our community of legal professionals, court staff, academics, and tech developers be doing now to build a better future in which AI helps close the justice gap — and where the risks above are mitigated as much as possible?

The stakeholders proposed several infrastructure and strategy efforts that could lead to this better future. These include:

  • ethical data sharing and model building protocols,
  • the development of AI models specifically for civil justice, using trustworthy data from legal aid groups and courts to train the model on legal procedure, rights, and services,
  • the establishment of benchmarks to measure the performance of AI in legal use cases,
  • the adoption of ethical and professional rules for AI use,
  • recommendations for user-friendly AI interfaces that help people understand what the AI is telling them & how to think critically about the information it provides, and
  • the creation of guides for litigants and policymakers on using AI for legal help.

Thanks to all the professionals who participated in the Spring 2023 session. We look forward to a near future where AI can help increase access to justice & effective court and legal aid services — while being held accountable, with its risks mitigated as much as possible.

We welcome further thoughts on the opportunity, risk, and infrastructure maps presented above — and suggestions for future events to continue building towards a better future of AI and legal help.