At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there will be an academic workshop on AI and Access to Justice.
There is an open call for submissions to the workshop, and the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper or to consider attending.
The workshop will be held on December 18, 2023 in Maastricht, Netherlands (with hybrid participation possibly available).
This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.
Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.
What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?
At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:
Identify the key challenges and opportunities for using AI to improve access to justice.
Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
Develop proposals for how to hold AI models accountable for the public good.
Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.
Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).
Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands
More about the JURIX Conference
The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.
This year’s JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands. It will take place on December 18-20, 2023.
The proceedings of the conference will be published in the Frontiers in Artificial Intelligence and Applications series of IOS Press. JURIX follows the gold standard of peer review and provides one of the best dissemination platforms in AI & law.
In October 2023, Margaret Hagan presented at the International Access to Justice Forum, on “Paths toward Access to Justice at Scale”. The presentation covered the preliminary results of stakeholder interviews she is conducting with justice professionals across the US about how best to scale one-off innovations and new ideas into more sustainable and impactful system changes.
Pilots to increase access to justice are happening in local courts, legal aid groups, government agencies, and community groups around the globe. These innovative new local services, technologies, and policies aim to build people’s capability, reduce barriers to access, and improve the quality of justice people receive. They are often built with an initial short-term investment to design the pilot and run it for a period. Most lack a clear plan to scale up into a more robust iteration, spread to other jurisdictions, or sustain the program past the initial investment. This presentation presents a framework of theories of change for the justice system, and stakeholders’ feedback on how to use them for impact.
Research on long-term Access to Justice strategies
The presentation covered the results of the qualitative, in-depth interviews with 11 legal aid lawyers, court staff members, legal technologists, funders, and statewide justice advocates about their work, impact, and long-term change.
The research interviews asked these professionals about their long-term, systematic theories of change, and asked them to rate theories of change that others have mentioned. They were asked about past projects they’ve run, how those projects have made an impact (or not), and what they have learned from their colleagues about what makes a particular initiative more impactful, sustainable, and successful.
The goal of the research interviews was to gather the informal knowledge that various professionals have gathered over years of work in reforming the justice system and improving people’s outcomes when they experience legal problems.
This knowledge often circulates casually at meetings, dinners, and over email, but is rarely laid out explicitly or systematically. The interviews also aimed to encourage reflection among practitioners, moving their focus from day-to-day work to long-term impact.
Stay tuned for more publications about this research, as the interviews & synthesis continue.
This past week, I had the privilege of attending the State of Privacy event in Rome, with policy, technical, and research leaders from Italy and Europe.
I was at a table focused on the intersection of Legal Design, AI platforms, and privacy protections.
Our multidisciplinary group spent several hours getting concrete: what are the scenarios and user needs around privacy & AI platforms? What are the main concerns and design challenges?
We then moved towards an initial brainstorm. What ideas for interventions, infrastructure, or processes could help move AI platforms towards greater privacy protections — and avoid privacy problems that have arisen in similar technology platform advancements in the recent past? What could we learn from privacy challenges, solutions, and failures that came with the rise of websites on the open Internet, the advancement of search engines, and the popular use of social media platforms?
Our group circled around some promising, exciting ideas for cross-Atlantic collaboration. Here is a short recap of them.
Learning from the User Burdens of Privacy Pop-ups & Cookie Banners
Can we avoid putting so many burdens on the user, as cookie banners and privacy pop-ups do on every website? We can learn from the current crop of privacy protections, which warn European visitors when they open any new website and require them to read, choose, and click through pop-up menus about cookies, privacy, and more. How can we reduce these user burdens and the interfaces that lead to privacy burn-out?
Smart AI privacy warnings, woven into interactions
Can the AI be smart enough to respond with warnings when people are crossing into a high-risk area? Perhaps instead of generalized warnings about privacy implications — a conversational AI agent can let a person know when they are sharing data/asking for information that has a higher risk of harm. This might be when a person asks a question about their health, finances, personal security, divorce/custody, domestic violence, or another topic that could have damaging consequences to them if others (their family members, financial institutions, law enforcement, insurance companies, or other third parties) found out. The AI could be programmed to be privacy-protective, to easily let a person choose at the moment about whether to take the risk of sharing this sensitive data, to help a person understand the risks in this specific domain, and to help the person delete or manage their privacy for this particular interaction.
Choosing the Right Moment for Privacy Warnings & Choices
Can warnings and choices around privacy come during the ‘right moment’? Perhaps it’s not best to warn people before they sign up for a service, or even right when they are logging on. This is typically when people are most hungry for AI interaction & information. They don’t want to be distracted. Rather, can the warnings, choices, and settings come during the interaction — or after it? A user is likely to have ‘buyer’s remorse’ with AI platforms: did I overshare? Who can see what I just shared? Could someone find out what I talked about with the AI? How can privacy terms & controls be easily accessible right when people need them, usually during these “clean up” moments?
Conducting More Varied User Research about AI & Protections
We need more user research in different cultures and demographics about how people use AI, relate to it, and critique it (or do not). To figure out how to develop privacy protections, warning/disclosure designs, and other techno-policy solutions, first we need a deeper understanding of various AI users, their needs and preferences, and their willingness to engage with different kinds of protections.
Building an International Network Working on AI & Privacy Protections
Could we have anchor universities, with strong computer science, policy, and law departments, that host workshops and training on the ethical development of AI platforms? These could help bring future technology leaders into cross-disciplinary contact with people from policy and law, to learn about social good matters like privacy. These cross-disciplinary groups could also help policy & law experts learn how to integrate their principles and research into more technical form, like by developing labeled datasets and model benchmarks.

Are you interested in ensuring there is privacy built into AI platforms? Are you working on user, technical, or policy research on what the privacy scenarios, needs, risks, and solutions might be on AI platforms? Please be in touch!

Thank you to Dr. Monica Palmirani for leading the Legal Design group at the State of Privacy event, at the lovely Museo Nazionale Etrusco di Villa Giulia in Rome.
In July, an interdisciplinary group of researchers at Stanford hosted the “AI and Legal Help Crossover” event, for stakeholders from the civil justice system and computer science to meet, talk, and identify promising next steps to advance the responsible development of AI for improving the justice system.
Here are 3 topic areas that arose out of this workshop, which we’re excited to work on more in the future!
Topic 1: Next Steps for advancing AI & Justice work
These are the activities that participants highlighted as valuable for advancing AI & justice work:
Events that dive into AI applications, research, and evaluation in specific areas of the justice system. For example, could we hold meetings that focus on specific topics, like:
High volume, quick proceedings like for Debt, Traffic, Parking, and Eviction. These case types might have similar dynamics, processes, and litigant needs.
What are the ideas for applications that could improve the quality of justice and outcomes in these areas?
What kinds of research, datasets, and protocols might be done in these areas in particular, that would matter to policymakers, service providers, or communities?
Innovation hot areas like Eviction Diversion and Criminal Justice Diversion, where there already are many pilots happening to improve outcomes. If there is already energy to pilot new interventions in a particular area, can we amplify these with AI?
Local Justice/AI R&D Community-building, to have regional hubs in areas where there are anchor institutions with AI research/development capacity & those with justice system expertise. Can we have a network of local groups who are working on improving AI development & research? And where local experts in computer science can learn about the opportunities for work with justice system actors — so that they are informed & connected to do relevant work.
Index of Justice/AI Research, Datasets, Pilots, and Partners, so that more people new to this area (both from technical and legal backgrounds) can see what is happening, build relationships, and collaborate with each other.
Domain Expert Community meetings that could attract more legal aid lawyers, self-help court staff, clerks, navigators, judicial officers, and those who have on-the-ground experience with helping litigants through the court system. Could we start gathering and standardizing their expertise — into more formal benchmarks, rating scales, and evaluation protocols?
Unauthorized Practice of Law & Regulatory Discussions to talk through where legal professional rules might play out with AI tools — and how they might be interpreted or adapted to best protect people from harm while benefiting people with increased access and empowerment.
National Group Leadership and Support, in which professional groups and consortia like the Legal Services Corporation, State Justice Institute Joint Technology Committee, CiTOC, Bureau of Justice Statistics, Department of Justice, or National Center for State Courts could help:
Define an agenda for projects, research, and evaluation needs
Encourage groups to standardize data and make it available
Call for pilots and partnerships, and help with the matchmaking of researchers, developers, and courts/legal aid groups
Incentivize pilots and evaluation with funding dedicated to human-centered AI for justice
Topic 2: Tasks where AI might help with justice systems research.
AI for litigant decision-making, to help a person understand possible outcomes that may result from a certain claim, defense, or choice they make in the justice system. It could help them be more strategic with their choices, wording, etc.
AI to craft better claims and arguments so that litigants or their advocates could understand the strongest claims, arguments, citations, and evidence to use.
AI for narratives and document completion, to help a litigant quickly move from their summary of facts and experiences to a properly formatted and written court filing.
AI for legalese-to-plain-language translation, which could help a person understand a notice, contract, court order, or other legal document they receive.
Service Improvement themes
AI for legal aid or court centers to intake/triage users to the right issue area, level of service, and resources they can use.
AI for chat and coaching, to package experts’ knowledge about following a court process, filling in a form, preparing for a hearing, or other legal tasks.
AI to spot policy/advocacy targets, where legal aid advocates, attorney general offices, or journalists could see which courts or judges might have issues with the quality of justice in their proceedings, where more training or advocacy for change might be needed.
AI to spot fraud, bad practices, and concerning trends. For example, can it scan petitions being filed in debt cases to flag to clerks where the dates, amounts, or claims mean that the claim is not valid? Or can it look through settlement agreements in housing or debt cases to find concerning terms or dynamics?
Research & System Design themes
AI to understand where processes need simplification, or where systems need to be reformed. Models could identify, through user error rates, continuances, low participation rates, or other factors, which parts of the justice system are the least accessible, and where rules, services, and technology need reform.
AI for understanding the court’s performance, to see what is happening not only in the case-level data but also at the document-level. This could give much more substance to what processes and outcomes people are experiencing.
Topic 3: Data that will be important to make progress on Justice & AI
Legal Procedure, Services, and Form data, that has been vetted by experts and approved as up to date. This data might then train a model of ‘reliable, authoritative’ legal information for each jurisdiction about what a litigant should know when dealing with a certain legal problem.
Could researchers work with the LSC and local legal aid & court groups that maintain self-help content (legal help websites, procedural guides, forms, etc.) to gather this local procedural information — and then build a model that can deliver high-quality, jurisdiction-specific procedural guidance?
Court Document data, that includes the substance of pleadings, settlements, and judgments. Access to datasets with substantive data about the claims litigants are making, the terms they agree to, and the outcomes in judgments can give needed information for research about the court, and also AI tools for litigants & service providers that analyze, synthesize, and predict.
Could courts partner with researchers to make filings and settlement documents available, in an ethical/privacy-friendly way?
Domain Expert data in which they help rate or label legal data. What is good or bad? What is helpful or harmful? Building successful AI pilots will need input and quality control from domain experts — especially those who see how legal documents, processes, and services play out in practice.
Could justice groups & universities help bring legal experts together to help define standards, label datasets, and give input on the quality of models’ output? What are the structures, incentives, and compensation needed to get legal experts more involved in this?
As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.
Could AI tools like ChatGPT, Bing Chat, and Google Bard help get more people crucial information about their rights & the law?
Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could it help them fill in paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?
The Stakeholder Session
In Spring 2023, the Stanford Legal Design Lab collaborated with the Self Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics.
The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.
The stakeholders discussed 3 main areas where AI could enhance access to justice and provide more help to individuals with legal problems.
How AI could help professionals like legal aid or court staff improve their service offerings
How AI could help community members & providers do legal problem-solving tasks
How AI could help executives, funders, advocates, and community leaders better manage their organizations, train others, and develop strategies for impact.
Opportunity 1: for Legal Aid & Court Service Providers to deliver better services more efficiently
The first opportunity area focused on how AI could assist legal aid providers in improving their services. The participants identified four ways in which AI could be beneficial:
Helping experts create user-friendly guides to legal processes & rights
Improving the speed & efficacy of tech tool development
Strengthening providers’ ability to connect with clients & build a strong relationship
Streamlining intake and referrals, and improving the creation of legal documents.
Within each of these zones, participants had many specific ideas.
Opportunity 2: For People & Providers to Do Legal Tasks
The second opportunity area focused on empowering people and providers to better perform legal tasks. The stakeholders identified five main ways AI could help:
understanding legal rules and policies,
identifying legal issues and directing a person to their menu of legal options,
predicting likely outcomes and facilitating mutual resolutions,
preparing legal documents and evidence, and
aiding in the preparation for in-person presentations and negotiations.
Each of these 5 areas of opportunities is full of detailed examples. Professionals had extensive ideas about how AI could help lawyers, paraprofessionals, and community members do legal tasks in better ways.
Opportunity 3: For Org Leadership, Policymaking & Strategies
The third area focused on how AI could assist providers and policymakers in managing their organizations and strategies. The stakeholders discussed three ways AI could be useful in this zone:
training and supporting service providers more efficiently,
optimizing business processes and resource allocation, and
helping leaders identify policy issues and create impactful strategies.
Explore the ideas for better training, onboarding, volunteer capacity, management, and strategizing.
Possible Risks & Harms of AI in Civil Justice
While discussing these opportunity areas, the stakeholders also addressed the risks and harms associated with the increased use of AI in the civil justice system. Some of the concerns raised include over-reliance on AI without assessing its quality and reliability, the provision of inaccurate or biased information, the potential for fraudulent practices, the influence of commercial actors over public interest, the lack of empathy or human support in AI systems, the risk of reinforcing existing biases, and the unequal access to AI tools.
This list of risks is not comprehensive, but it offers a first typology that future research & discussions (especially with other stakeholders, like community members and leaders) can build upon.
Infrastructure & initiatives to prioritize now
Our discussion closed out with a discussion of practical next steps. What can our community of legal professionals, court staff, academics, and tech developers be doing now to build a better future in which AI helps close the justice gap — and where the risks above are mitigated as much as possible?
The stakeholders proposed several infrastructure and strategy efforts that could lead to this better future. These include:
ethical data sharing and model building protocols,
the development of AI models specifically for civil justice, using trustworthy data from legal aid groups and courts to train the model on legal procedure, rights, and services,
the establishment of benchmarks to measure the performance of AI in legal use cases,
the adoption of ethical and professional rules for AI use,
recommendations for user-friendly AI interfaces, that can ensure people understand what the AI is telling them & how to think critically about the information it provides, and
the creation of guides for litigants and policymakers on using AI for legal help.
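As one illustration of what a benchmark for legal use cases could look like, here is a minimal sketch; the question, the expert-labeled “required facts,” and the containment-based scoring rule are all simplifying assumptions for demonstration.

```python
# Hypothetical sketch of a civil-justice benchmark harness: expert-labeled
# key points per question, scored against a model's responses. The
# question, labels, and scoring rule are illustrative placeholders.

BENCHMARK = [
    {
        "question": "What should I do after receiving an eviction summons?",
        # Expert-labeled key points a good answer must mention (placeholders)
        "required_facts": ["file an answer", "deadline"],
    },
]

def score_response(response: str, required_facts: list[str]) -> float:
    """Fraction of expert-labeled key facts that appear in the response."""
    hits = sum(1 for fact in required_facts if fact.lower() in response.lower())
    return hits / len(required_facts)

def run_benchmark(model, benchmark=BENCHMARK) -> float:
    """Average score of `model(question)` across all benchmark items."""
    scores = [
        score_response(model(item["question"]), item["required_facts"])
        for item in benchmark
    ]
    return sum(scores) / len(scores)
```

A real benchmark would need many items per jurisdiction, richer scoring (expert rating scales rather than substring matching), and the domain-expert labeling efforts described above.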
Thanks to all the professionals who participated in the Spring 2023 session. We look forward to a near future where AI can help increase access to justice & effective court and legal aid services — while also being held accountable and its risks being mitigated as much as possible.
We welcome further thoughts on the opportunity, risk, and infrastructure maps presented above — and suggestions for future events to continue building towards a better future of AI and legal help.
More about the May event from the American Academy: “Increasingly capable AI tools like Chat GPT and Bing Chat will impact the accessibility, reliability, and regulation of legal and other professional services, like healthcare, for an underserved public. In this event, Jason Barnwell, Margaret Hagan, and Andrew M. Perlman discuss these and other implications of AI’s rapidly evolving capabilities.”
You can see a recording of the panel, which featured Jason Barnwell (Microsoft), Margaret Hagan (Stanford Legal Design Lab), and Andrew M. Perlman (Suffolk Law School).
The National Center for State Courts has a working group of justice leaders who have released a December 2022 report “Just Horizons” — pointing to systemic vulnerabilities in the court system and opportunities for building a better future, where the institutions are strong, the public is served, and there is a healthy justice ecosystem.
There are 6 areas of Vulnerabilities & Opportunities laid out, where justice institutions can invest to strengthen their performance, impact, and infrastructure. These include a focus on the future, using data, private entity management, emergency preparedness, future-ready workforce, and user-centered design.
These areas all resonate with our work at the Legal Design Lab:
A focus on the intentional design of civic systems: government institutions like courts should be looking to the future to see what kinds of services, relationships, infrastructure, and outcomes they want to be having. They should work with diverse stakeholders to set this future agenda, and scope out projects to reach this better future.
Prioritizing the users of the government institutions, rather than just the institutions themselves: many times, when we talk about what government institutions like courts should be doing, the focus is on internal metrics — efficiency, cases closed, calendaring, staffing, and budget. These are important factors, but they must be balanced out with metrics that reflect the interests of the people the institutions are meant to help and serve.
Participation of diverse stakeholders in providing feedback about the status quo, finding and scoping new solutions, and evaluating new revisions to ensure they are serving the audiences as intended.
As more courts use text messages to improve litigants’ access to justice, many wonder exactly how to set up texting.
What are the words, schedule, and flow of text messages for a court to use?
From our experience with working with criminal, traffic, housing, and other civil courts in doing text message projects, we have some options for courts to use.
There are 3 models:
Multiple-day reminder countdown to the hearing
Reminders including services/self-help
Interactive flow with services and accommodation help
We go through each texting model and share a sample script that courts have used in the past.
Texting Model 1: Multiple-Day Countdown to the Hearing
The goal of this reminder text message flow is to increase participation rates and reduce default rates for a hearing. It is a straightforward reminder script, focused on dates, times, and locations, with a mention of the consequences of default.
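As a rough illustration of what a countdown script might look like in code, here is a minimal sketch; the 10-5-1 day offsets and the message wording are assumptions for demonstration, not a prescribed court script.

```python
from datetime import date, timedelta

# Illustrative sketch of Texting Model 1: a multiple-day countdown of
# reminder texts before a hearing. The 10-5-1 day offsets and message
# wording are examples only, not an official court script.

REMINDER_OFFSETS = [10, 5, 1]  # days before the hearing

def reminder_schedule(hearing: date, courtroom: str) -> list[tuple[date, str]]:
    """Return (send_date, message) pairs counting down to the hearing."""
    return [
        (
            hearing - timedelta(days=offset),
            f"Reminder: your hearing is in {offset} day(s) on "
            f"{hearing.isoformat()} at {courtroom}. "
            "If you do not appear, a default judgment may be entered.",
        )
        for offset in REMINDER_OFFSETS
    ]
```

Texting Model 2 (described next) could reuse the same schedule, appending self-help links and referral phone numbers to each message body.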
Texting Model 2: Reminder Plus Services
A second model for court reminders adds more services into the messages.
It can have a 10-5-1 day reminder schedule, but the messages also add in links and referrals to self-help services.
These could be links to workshops, self-help centers, hotlines, right to counsel, navigators, or other expert assistance services.
Texting Model 3: Interactive Service and Accommodations
In this model, the focus is less on reminders and more on connections to services.
The text message flow would start if the litigant texts a keyword into a court phone number, signaling that they want to find services or accommodation. The court summons might have this phone number and keyword on it, for litigants to follow.
The message flow is interactive, allowing the litigant to choose what kind of information or connections they would like.
The court sets up the interactive menu of options and provides phone numbers, hours, and other details about how to get legal aid, assistance, self-help, court directions, and disability or language accommodations.
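A minimal sketch of what this interactive flow might look like; the keyword, menu options, locations, and phone numbers are all hypothetical placeholders that a court would replace with its own details.

```python
# Illustrative sketch of Texting Model 3: an interactive keyword-driven
# flow. The keyword "COURTHELP", menu labels, locations, and phone
# numbers are hypothetical placeholders, not real court contacts.

START_KEYWORD = "COURTHELP"

MENU = {
    "1": ("Legal aid", "Legal aid hotline: 555-0100, open Mon-Fri 9am-5pm."),
    "2": ("Self-help center", "Self-help center: Room 102, open 8:30am-4pm."),
    "3": ("Directions", "Courthouse: 100 Main St; parking on Oak Ave."),
    "4": ("Accommodations",
          "For disability or language accommodations, call 555-0101."),
}

def handle_incoming_text(body: str) -> str:
    """Return the court's reply for one inbound SMS."""
    text = body.strip().upper()
    if text == START_KEYWORD:
        lines = [f"{key}: {label}" for key, (label, _) in MENU.items()]
        return "Reply with a number for more info:\n" + "\n".join(lines)
    if text in MENU:
        return MENU[text][1]
    return f"Sorry, we didn't understand. Text {START_KEYWORD} to start over."
```

In deployment, a function like this would sit behind an SMS provider’s inbound-message webhook, and a real flow would also track per-litigant state (for example, connecting the conversation to a specific case number).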