
AI and A2J webinar series: Hannes Westermann — The Hammer and the Carpenter

By Nóra Al Haider

When you explore the online legal community space, you’ll notice that the topic of AI and Access to Justice (A2J) comes up quite frequently. Most agree that AI & A2J is a consequential topic, yet the phrase itself remains nebulous. What do we mean by AI & A2J? What kind of experiments are researchers and practitioners conducting? How are new tools and projects evaluated?

To delve into these questions, the Stanford Legal Design Lab recently initiated a webinar series. Each month, a presenter is invited to discuss a study or project in the AI & A2J space. Attendees can learn about new AI & A2J projects, ask questions, make connections, and find new ideas or protocols for their own work. The ultimate goal is to foster more collaboration among researchers, service providers, technologists, and policymakers.

The inaugural presenter for the webinar was Hannes Westermann. Hannes is an Assistant Professor in Law & Artificial Intelligence at Maastricht University and the Maastricht Law and Tech Lab. His current research focuses on using generative AI to enhance access to justice in a safe and practical manner.

Generative AI for Access to Justice

Generative AI can be a valuable addition to the access to justice space. Laypeople often have difficulty resolving their everyday legal issues, from debt to employment problems. This struggle is partly due to the complexity of the law. It is challenging for people to understand how laws apply to them and what actions they should take regarding their legal issues, assuming they can identify that they have a legal issue in the first place.

Moreover, the cost of going to court can be high, not just financially but also in terms of time and emotional stress. This is particularly true if individuals are unaware of how the process works and how they should interact with the court system.

Generative AI could help alleviate some of these issues and create more opportunities for users to interact with the legal system in a user-centered way.

Hannes spotlighted three projects during his presentation that address these issues.

JusticeBot: Helping Resolve Landlord-Tenant Disputes

The JusticeBot project, developed at the Cyberjustice Laboratory in Montreal, provides legal information to laypeople. The first version, available at https://justicebot.ca (in French), addresses landlord-tenant disputes. The bot asks users questions about their landlord-tenant issues and provides legal information based on their responses. Users start by indicating whether they are a landlord or tenant. Based on their choice, the system presents a series of questions tailored to common issues in that category, such as unpaid rent or early lease termination. The system guides users through these questions, providing explanations and references to relevant legal texts.

For instance, if a landlord indicates that their tenant is frequently late with rent payments, the JusticeBot asks follow-up questions to determine the frequency and impact of these late payments. It then provides information on the landlord’s legal rights and possible next steps, such as terminating the lease or seeking compensation.
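The flow Hannes describes is essentially a guided interview over a decision tree. The sketch below is purely illustrative (it is not the JusticeBot code, and the questions, node names, and legal text are invented): each node either asks a question and routes on the answer, or displays the legal information associated with that branch.

```python
# A minimal, purely illustrative sketch of a logic-tree guided interview
# (NOT the actual JusticeBot code; all node contents are invented).

TREE = {
    "start": {
        "question": "Are you a landlord or a tenant? (landlord/tenant)",
        "answers": {"landlord": "landlord_issue", "tenant": "tenant_issue"},
    },
    "landlord_issue": {
        "question": "Is your tenant frequently late with rent payments? (yes/no)",
        "answers": {"yes": "late_rent_info", "no": "other_issue"},
    },
    "tenant_issue": {
        "info": "Branches for tenant-side issues would go here.",
    },
    "late_rent_info": {
        "info": "Frequent late payments may allow the landlord to ask the "
                "tribunal to terminate the lease or seek compensation "
                "(illustrative placeholder text, not legal advice).",
    },
    "other_issue": {
        "info": "Additional branches would cover other common issues.",
    },
}

def run_interview(tree, node_id="start"):
    node = tree[node_id]
    if "info" in node:                       # leaf node: show legal information
        print(node["info"])
        return
    answer = input(node["question"] + " ").strip().lower()
    next_id = node["answers"].get(answer)
    if next_id is None:                      # unrecognized answer: re-ask
        print("Sorry, I didn't understand that answer.")
        return run_interview(tree, node_id)
    run_interview(tree, next_id)

if __name__ == "__main__":
    run_interview(TREE)
```

Because every path through the tree is authored by hand, the information shown at each leaf can be verified in advance.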

The team at the Cyberjustice Laboratory collaborated with the Tribunal administratif du logement (TAL) in developing and marketing the JusticeBot. The TAL receives over 70,000 claims and over a million calls annually. By automating the initial information-gathering process, the JusticeBot could potentially alleviate some of this demand, allowing users to resolve issues without immediate legal intervention. So far, the JusticeBot has been used over 35,000 times.

The first iteration of the bot was built as a logic tree, where there was a logical connection between the questions and answers, making it possible to verify the accuracy of the legal information. In recent years, Westermann and his team have experimented with integrating language models such as GPT-4 into the JusticeBot (see here and here). This hybrid approach could ensure the accuracy of the information while enhancing the human-centered interface of the bot and increasing the efficiency of creating new bots.
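The presentation did not go into the implementation details of this hybrid, but one pattern it gestures at is using the language model only to map a user’s free-text description onto the existing, hand-verified logic tree, so the legal content itself still comes from the tree. The sketch below illustrates that idea with the OpenAI Python client; the model name, prompt, and category labels are illustrative assumptions, not details from Hannes’s papers.

```python
# Hedged sketch: use an LLM only to classify a free-text description into one
# of the pre-defined logic-tree branches; the legal information shown to the
# user still comes from the hand-verified tree, so the model never drafts advice.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NODE_LABELS = ["late_rent", "early_lease_termination", "repairs", "other"]

def classify_issue(description: str) -> str:
    prompt = (
        "Classify the landlord-tenant issue below into exactly one of these "
        f"categories: {', '.join(NODE_LABELS)}.\n\n"
        f"Issue: {description}\n\nAnswer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",                        # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip()
    return label if label in NODE_LABELS else "other"   # fall back safely

# The returned label would then select the corresponding branch of the
# verified logic tree, e.g. run_interview(TREE, node_id=label).
```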

DALLMA: Document Assistance

The next project Hannes discussed is DALLMA, which stands for Document Automation, Large Language Model Assisted. This early-stage project aims to automate the drafting of legal documents using large language models. The current version focuses on forms, as people often find them complicated to fill out. The AI is used to fill structured information into legal documents: users provide basic information and context, and the AI assists in structuring and populating the document with relevant legal content. In the future, this could increase efficiency in drafting forms and other legal documents.
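Hannes did not walk through DALLMA’s internals, but the general pattern of “AI fills structured information into a document” can be sketched as a two-step pipeline: ask the model to extract the fields a form needs as JSON, then insert those fields into a fixed template so the final wording stays under the drafter’s control. The field names, prompt, and template below are illustrative assumptions, not DALLMA’s actual design.

```python
# Illustrative sketch of LLM-assisted form filling (not DALLMA's actual code):
# step 1 extracts structured fields from the user's plain-language account,
# step 2 fills a fixed template so the document text is never free-generated.
import json
from openai import OpenAI

client = OpenAI()

FORM_TEMPLATE = (
    "NOTICE OF CLAIM\n"
    "Claimant: {claimant_name}\n"
    "Respondent: {respondent_name}\n"
    "Amount claimed: {amount}\n"
    "Summary of facts: {facts}\n"
)

def extract_fields(user_story: str) -> dict:
    prompt = (
        "Extract the following fields from the text as JSON with keys "
        "claimant_name, respondent_name, amount, facts. "
        "Use null for anything not mentioned.\n\n" + user_story
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},   # ask for strict JSON output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

def draft_document(user_story: str) -> str:
    fields = extract_fields(user_story)
    keys = ["claimant_name", "respondent_name", "amount", "facts"]
    # Leave an explicit placeholder wherever the user gave no information.
    return FORM_TEMPLATE.format(
        **{k: fields.get(k) or "[TO BE COMPLETED]" for k in keys}
    )
```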

LLMediator: Enhancing Online Dispute Resolution

The LLMediator explores the use of large language models in online dispute resolution (ODR). The LLMediator makes suggestions on how to phrase communications more amicably during disputes. It analyzes the content and sentiment of the message to prevent escalation and promote resolution. For example, if the LLMediator detects aggressive language that could escalate the conflict, the AI might suggest more constructive phrasing. It can also suggest a potential intervention message to a human mediator, supporting them in their work of encouraging a friendly resolution to the dispute. In short, it acts as a virtual assistant that supports mediators and parties by providing suggestions while still allowing the user to make the final decision.
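As described, the LLMediator does not act on anyone’s behalf; it drafts suggestions that a party or a human mediator can accept, edit, or ignore. A minimal version of that “suggest, don’t decide” pattern might look like the following; the prompt and model choice are assumptions, not the project’s implementation.

```python
# Minimal sketch of the "suggest, don't decide" pattern described for LLMediator:
# the model proposes a calmer rewording, and a human chooses whether to use it.
from openai import OpenAI

client = OpenAI()

def suggest_rewording(message: str) -> str:
    prompt = (
        "You assist parties on an online dispute resolution platform. "
        "Rewrite the message below so it is calm, constructive, and factual, "
        "while preserving its meaning:\n\n" + message
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

original = "You STILL haven't paid me back. This is the last time I ask nicely!"
print("Suggested rewording:\n", suggest_rewording(original))
# The party (or a human mediator) decides whether to send the suggestion,
# edit it, or keep the original message.
```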

The Challenges of AI

The projects presented by Hannes show the promise of integrating LLMs into the A2J space. However, it is important to be aware of the challenges involved in integrating such instruments into the legal system. One issue is hallucination: the AI generates plausible-sounding but incorrect information, which is especially problematic in the legal domain. Hannes explained that this happens because the AI predicts the most probable continuation of a phrase based on its training data, which does not guarantee accuracy. More research needs to be done to find ways to mitigate these issues. One potential solution is to conceptualize such systems as “augmented intelligence”, as demonstrated, for example, by the LLMediator project. In this approach, the AI system does not provide predictions or recommendations to the user. Rather, it provides information or suggestions that help users make better decisions or accomplish tasks more efficiently.

Another potential solution is to combine AI systems with transparent, logical reasoning systems, as shown, for example, in DALLMA. This approach has the potential to combine the power of large language models with legal expert knowledge to ensure that users receive accurate legal information. It could also help tackle biases that may be present in the training datasets of AI models.

Privacy is another concern, especially in the legal field, which deals with large amounts of sensitive and confidential information. This data can be sent to external servers when using large language models. However, Hannes notes that recent developments in AI technology have led to powerful local AI models that offer more privacy protections. AI providers could also offer contractual guarantees of data protection.

To make sure that AI is implemented in a safe and practical manner in the legal system, it is important to keep these and other challenges in mind. Potential ways of mitigating the challenges include technical innovations and evaluations, regulatory and ethical considerations, guidelines for use of AI in legal contexts, and education for users about the limitations of AI and the importance of verifying the information received through AI models.

Future direction

Hannes concluded his presentation by stating that generative AI should be viewed as a powerful tool that augments human intelligence. The analogy he used is that of the hammer and the carpenter.

“Will law be replaced by AI is a bit like asking: ‘Will the carpenter be replaced by the hammer?’ It just kind of doesn’t make sense as a question, as the hammer is a tool used by the carpenter and not as a replacement for them.”

AI is a powerful tool that can be a useful addition to use cases in the access to justice space. More research needs to be done to better understand the use cases and evaluate the tools. Hannes hopes that the community will engage with the systems and understand what they have to offer so that we can leverage AI to increase access to justice in a safe way.

Read more about Hannes’ work here:

https://scholar.google.com/citations?user=rJvk-twAAAAJ&hl=en


Court Observation Hub

Nóra Al Haider, Oct 21, 2021

“Please wait for the host to start this meeting”

Nowadays, in many jurisdictions, litigants can opt to use Zoom to access their hearing. This is one of the many effects the pandemic has had on the legal system. Webex, Teams, and Zoom are starting to feel like a regular part of court proceedings.

Virtual courts. Illustration by Nóra Al Haider

As with all new developments, this change poses opportunities and challenges that we will delve into in future publications. Online courts have not only affected how ‘regular’ stakeholders, such as litigants, judges, court clerks, and lawyers, navigate the legal system. Easy access to hearings also means that anyone with an interest in a case can easily Zoom in as a court watcher. Community members, journalists, activists, and advocates no longer have to take time out of their day to drive to a courthouse, stand in line, and go through security before finally being able to attend a hearing. Nowadays, most hearings are just a click away for those who are interested.

Online courts have increased the number of court observation groups around the country. In essence, court observation groups are community-driven clubs that systematically observe hearings in their jurisdiction. These groups not only draw attention to individual cases but, thanks to the sheer number of observers, can also detect structural problems in the system. The growing interest in court observation is an opportunity for academics and non-profits to work with community partners to unearth and gather more data about structural issues in the legal system.

The development of court observation groups has been cheered on by many people, including non-legal professionals. The singer Fiona Apple used the Grammys to bring more attention to virtual courts and encouraged people to join their local court watch groups.

Chief Justice McCormack has stated several times that court livestreams increase transparency.

This increased attention, combined with how easy it now is to access hearings, has prompted many individuals to join an observation group. To facilitate this process, we developed the Court Observation Hub at the Legal Design Lab. This hub provides an overview of links to online proceedings and court watch groups in different jurisdictions.

https://virtuallegal.systems/observation/

The hub also offers an overview of tools for setting up court watch groups. Hopefully, in the future we’ll be able to expand this website with measurement instruments that community court watchers can use for free. It could have a monumental impact on the legal system if community groups are able to systematically collect and share information. This development could trigger positive policy changes, increase transparency in the legal system, and strengthen the rule of law.

Visit the Court Observation Hub at https://virtuallegal.systems/observation/