As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.
Could AI tools like ChatGPT, Bing Chat, and Google Bard help get more people crucial information about their rights & the law?
Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could they help them fill out paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?
The Stakeholder Session
In Spring 2023, the Stanford Legal Design Lab collaborated with the Self-Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics.
The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.
The stakeholders discussed three main areas where AI could enhance access to justice and provide more help to individuals with legal problems.
- How AI could help professionals like legal aid or court staff improve their service offerings
- How AI could help community members & providers do legal problem-solving tasks
- How AI could help executives, funders, advocates, and community leaders better manage their organizations, train others, and develop strategies for impact.
Opportunity 1: For Legal Aid & Court Service Providers to Deliver Better Services More Efficiently
The first opportunity area focused on how AI could assist legal aid providers in improving their services. The participants identified four ways in which AI could be beneficial:
- Helping experts create user-friendly guides to legal processes & rights
- Improving the speed & efficacy of tech tool development
- Strengthening providers’ ability to connect with clients & build a strong relationship
- Streamlining intake and referrals, and improving the creation of legal documents.
Within each of these zones, participants had many specific ideas.
Opportunity 2: For People & Providers to Do Legal Tasks
The second opportunity area focused on empowering people and providers to better perform legal tasks. The stakeholders identified five main ways AI could help:
- understanding legal rules and policies,
- identifying legal issues and directing a person to their menu of legal options,
- predicting likely outcomes and facilitating mutual resolutions,
- preparing legal documents and evidence, and
- aiding in the preparation for in-person presentations and negotiations.
Each of these five opportunity areas is full of detailed examples. Professionals had extensive ideas about how AI could help lawyers, paraprofessionals, and community members do legal tasks in better ways. Explore each of the five areas by clicking on the images below.
Opportunity 3: For Org Leadership, Policymaking & Strategies
The third area focused on how AI could assist providers and policymakers in managing their organizations and strategies. The stakeholders discussed three ways AI could be useful in this zone:
- training and supporting service providers more efficiently,
- optimizing business processes and resource allocation, and
- helping leaders identify policy issues and create impactful strategies.
Explore the ideas for better training, onboarding, volunteer capacity, management, and strategizing by clicking on the images below.
Possible Risks & Harms of AI in Civil Justice
While discussing these opportunity areas, the stakeholders also addressed the risks and harms associated with the increased use of AI in the civil justice system. Some of the concerns raised include:
- over-reliance on AI without assessing its quality and reliability,
- the provision of inaccurate or biased information,
- the potential for fraudulent practices,
- the influence of commercial actors over the public interest,
- the lack of empathy or human support in AI systems,
- the risk of reinforcing existing biases, and
- unequal access to AI tools.
This list of risks is not comprehensive, but it offers a first typology that future research & discussions (especially with other stakeholders, like community members and leaders) can build upon.
Infrastructure & initiatives to prioritize now
The session closed with a discussion of practical next steps. What can our community of legal professionals, court staff, academics, and tech developers be doing now to build a better future in which AI helps close the justice gap — and where the risks above are mitigated as much as possible?
The stakeholders proposed several infrastructure and strategy efforts that could lead to this better future. These include:
- ethical data sharing and model building protocols,
- the development of AI models specifically for civil justice, using trustworthy data from legal aid groups and courts to train the model on legal procedure, rights, and services,
- the establishment of benchmarks to measure the performance of AI in legal use cases,
- the adoption of ethical and professional rules for AI use,
- recommendations for user-friendly AI interfaces that ensure people understand what the AI is telling them & how to think critically about the information it provides, and
- the creation of guides for litigants and policymakers on using AI for legal help.
Thanks to all the professionals who participated in the Spring 2023 session. We look forward to a near future where AI can help increase access to justice & effective court and legal aid services — while also being held accountable, with its risks mitigated as much as possible.
We welcome further thoughts on the opportunity, risk, and infrastructure maps presented above — and suggestions for future events to continue building towards a better future of AI and legal help.