
AI+A2J 2025 Summit Takeaways

The Stanford Legal Design Lab hosted its second annual AI & Access to Justice Summit, gathering leaders from legal aid organizations, technology companies, academia, philanthropy, and private practice. These professionals came together to discuss the potential of generative AI and — most crucially at this moment in Autumn 2025 — to strategize about how to make AI work at scale to address the justice gap.

The summit’s mission was clear: to move beyond the hype cycle and forge a concrete path forward for a sustainable AI & A2J ecosystem across the US and beyond. The central question posed was how the legal community could work as an ecosystem to harness this technology, setting an agenda for 2, 5, and 10-year horizons to create applications, infrastructure, and new service/business models that can get more people access to justice.

The Arc of the Summit

The summit was structured over two days to help the diverse participants learn about AI tools, pilots, case studies, and lessons learned for legal teams — and then to give participants the opportunity to design new interventions and strategies for a stronger AI R&D ecosystem.

Day 1 was dedicated to learning and inspiration, featuring a comprehensive slate of speakers who presented hands-on demonstrations of cutting-edge AI tools, shared detailed case studies of successful pilots, and offered insights from the front lines of legal tech innovation.

From Day 1’s learning to Day 2’s mission

Day 2 was designed to shift the focus from listening to doing, challenging attendees to synthesize the previous day’s knowledge into strategic designs, collaborative agendas, and new partnerships. This structure was designed to build a shared foundation of knowledge before embarking on the collaborative work of building the future.

The Summit began by equipping attendees with a new arsenal of technological capabilities, showcasing the tools that serve as the building blocks for this new era in justice.

Our Key AI + A2J Ecosystem Moment

The key theme of this year’s AI+A2J Summit was building a strong, coordinated R&D ecosystem, because our community of legal help providers, researchers, public interest tech-builders, and strategists is at a key moment.

It’s been over 3 years now since the launch of ChatGPT. Where are we going with AI in access to justice?

We are several years into the LLM era now — past the first wave of surprise, demos, and hype — and into the phase where real institutions are deciding what to do with these tools. People are already using AI to solve problems in their everyday lives, including legal problems, whether courts and legal aid organizations are ready or not. That means the “AI moment” is no longer hypothetical: it’s shaping expectations, workflows, and trust right now. But many justice leaders are still confused, overwhelmed, or unsure about how to get to positive impact in this new AI era.

Leaders are not sure how to make progress.

This is exactly why an AI+A2J Summit like this matters. We’re at a pivot point where the field can either coordinate and build durable public-interest infrastructure — or fragment into disconnected experiments that don’t translate into meaningful service capacity. A2J leaders are balancing urgency with caution, and the choices made in the next year or two will set patterns that could last a decade: what gets adopted, what gets regulated, what gets trusted, and what gets abandoned.

What will 2030 look like for A2J?

We have possible rosy futures and we have more devastating ones.

Which of these possible near futures will we have in 2030 for access to justice?

A robust, accessible marketplace of services — where everyone having a problem with their landlord, debt collector, spouse, employer, neighbor, or government can easily get the help they need in the form they want?

Or will we have a hugely underserved public that is frustrated and angry, facing an ever-growing asymmetry of robo-filed lawsuits and relying on low-quality AI help?

What is stopping great innovation impact?

A few big trends could stop our community from delivering great outcomes in the next five years:

  • too much chilling regulation,
  • under-performing, under-safety-tested solutions that lead to real harms and bad headlines,
  • not enough money flowing toward solutions, with everyone reinventing the wheel on their own and delivering fragile, costly local solutions, and
  • a failure to build substantive, meaningful solutions — focusing instead on small, peripheral tasks.

The primary barriers are not just technical — they’re operational, institutional, and human. Legal organizations need tools that are reliable enough to use with real people, real deadlines, and real consequences. But today, many pilots struggle with consistency, integration into daily workflows, and the basic “plumbing” that makes technology usable at scale: identity management, knowledge management, access controls, and clear accountability when something goes wrong.

Trust is also fragile in high-stakes settings, and the cost of a failure is unusually high. A single under-tested tool can create public harm, undermine confidence internally, and trigger an overcorrection that chills innovation. In parallel, many organizations are already stretched thin and running on complex legacy systems. Without shared standards, shared evaluation, and shared implementation support, the burden of “doing AI responsibly” becomes too heavy for individual teams to carry alone.

At the Summit, we worked on 3 different strategy levels to try to prevent these blocks from pushing us to low impact or a continued status quo.

3 Levels of Strategic Work to Set Us Toward a Good Ecosystem

The goal of the Summit was to get leaders from across the A2J world to clearly define 3 levels of strategy. That means going beyond the usual strategic track — which is just defining the policies and tech agenda for their internal organization.

This meant focusing on both project mode (what are cool ideas and use cases) and also strategy mode — so we can shape where this goes, rather than react to whatever the market and technology delivers. We’re convening people who are already experimenting with AI in courts, legal aid, libraries, and community justice organizations, and we’re asking them to step back and make intentional choices about what they will build, buy, govern, and measure over the next 12–24 months. The point is to move from isolated pilots to durable capacity: tools that can be trusted, maintained, and integrated into real workflows, with clear guardrails for privacy, security, and quality.

To do that, the Summit is designed to push work at three linked levels of strategy.

The 3 levels of strategy

Strategy Level 1: Internal Org Strategy around AI

First is internal, organizational strategy: what each institution needs to do internally — data governance, procurement standards, evaluation protocols, staff training, change management, and the operational “plumbing” that makes AI usable and safe.

Strategy Level 2: Ecosystem Strategy

Second is ecosystem strategy: how different A2J organizations can collaborate to increase capacity and impact.

Thinking through an Ecosystem approach to share capacity and improve outcomes

This can scope out what we should build together — shared playbooks, common evaluation and certification approaches, interoperable data and knowledge standards, and shared infrastructure that prevents every jurisdiction from reinventing fragile, costly solutions.

Strategy Level 3: Toward Big Tech & A2J

Third is strategy vis-à-vis big tech: how the justice community can engage major AI platform providers with clear expectations and leverage — so the next wave of product decisions, safety defaults, partnerships, and pricing structures actually support access to justice rather than widen gaps.

As more people and providers go to Big Tech for their answers and development work, how do we get to better A2J impact and outcomes?

The Summit is ultimately about making a coordinated, public-interest plan now — so that by 2030 we have a legal help ecosystem that is more trustworthy, more usable, more interoperable, and able to serve far more people with far less friction.

The Modern A2J Toolbox: A Growing set of AI-Powered Solutions

Equipping justice professionals with the right technology is a cornerstone of modernizing access to justice. The Summit provided a tour of AI tools available to the community, ranging from comprehensive legal platforms designed for large-scale litigation to custom-built solutions tailored for specific legal aid workflows. This tour of the growing AI toolbox revealed an expanding arsenal of capabilities designed to augment legal work, streamline processes, and extend the reach of legal services.

Research & Case Management Assistants

Teams from many different AI and legal tech teams presented their solutions and explained how they can be used to expand access to justice.

  • Notebook LM: The Notebook LM tool from Google empowers users to create intelligent digital notebooks from their case files and documents. Its capabilities have been significantly enhanced, featuring an expanded context window of up to 1 million tokens, allowing it to digest and analyze vast amounts of information. The platform is fully multilingual, supporting over 100 languages for both queries and content generation. This enables it to generate a wide range of work products, from infographics and slide decks to narrated video overviews, making it a versatile tool for both internal analysis and client communication.
  • Harvey: Harvey is an AI platform built specifically for legal professionals, structured around three core components. The Assistant functions as a conversational interface for asking complex legal questions based on uploaded files and integrated research sources like LexisNexis. The Vault serves as a secure repository for case documents, enabling deep analysis across up to 10,000 different documents at once. Finally, Workflows provide one-click solutions for common, repeatable tasks like building case timelines or translating documents, with the ability for organizations to create and embed their own custom playbooks.
  • Thomson Reuters’ CoCounsel: CoCounsel is designed to leverage an organization’s complete universe of information — from its own internal data and knowledge management systems to the primary law available through Westlaw. This comprehensive integration allows it to automate and assist with tasks across the entire client representation lifecycle, from initial intake and case assessment to legal research and discovery preparation. The platform is built to function like a human colleague, capable of pulling together disparate information sources to efficiently construct the building blocks of legal practice. TR also has an AI for Justice program that leverages CoCounsel and its team to help legal aid organizations.
  • vLex’s Vincent AI: Vincent AI adopts a workflow-based approach to legal tasks, offering dedicated modules for legal research, contract analysis, complaint review, and large-scale document review. Its design is particularly user-friendly for those with “prompting anxiety,” as it can automatically analyze an uploaded document (such as a lease or complaint) and suggest relevant next steps and analyses. A key feature is its ability to process not just text but also audio and video content, opening up powerful applications for tasks like analyzing client intake calls or video interviews to rapidly identify key issues.

AI on Case Management & E-Discovery Platforms

  • Legal Server: As a long-standing case management system, Legal Server has introduced an AI assistant named “Ellis.” The platform’s core approach to AI is rooted in data privacy and relevance. Rather than drawing on the open internet, Ellis is trained exclusively on an individual client organization’s own isolated data repository, including its help documentation, case notes, and internal documents. This ensures that answers are grounded in the organization’s specific context and expertise while maintaining strict client confidentiality.
  • Relativity: Relativity’s e-discovery platform is made available to justice-focused organizations through its “Justice for Change” program. The platform includes powerful generative AI features like AIR for Review, which can analyze hundreds of thousands of documents to identify key people, terms, and events in an investigation. It also features integrated translation tools that support over 100 languages, including right-to-left languages like Hebrew, allowing legal teams to seamlessly work with multilingual case documents within a single, secure environment.

These tools represent a leap in technological capability. They all show AI’s growing ability to help legal teams synthesize information, work with documents, conduct research, produce key work product, and automate workflows. But how do we go from tech tools to real-world impact: solutions that are deployed at scale and perform at a high level? The Summit moved from tech demos to case studies to hear accounts of how to get to value and impact.

From Pilots to Impact: AI in Action Across the Justice Sector

In the second half of Day 1, the Summit moved beyond product demonstrations to showcase a series of compelling case studies from across the justice sector. These presentations offered proof points of how organizations are already leveraging AI to serve more people, improve service quality, and create new efficiencies, delivering concrete value to their clients and communities today.

  • Legal Aid Society of Middle Tennessee & The Cumberlands — Automating Expungement Petitions: The “ExpungeMate” project was created to tackle the manual, time-consuming process of reviewing criminal records and preparing expungement petitions. By building a custom GPT to analyze records and an automated workflow to generate the necessary legal forms, the organization dramatically transformed its expungement clinics. At a single event, their output surged from 70 expungements to 751. This newfound efficiency freed up attorneys to provide holistic advice and enabled a more comprehensive service model that brought judges, district attorneys, and clerks on-site to reinstate driver’s licenses and waive court debt in real-time.
  • Citizens Advice (UK) — Empowering Advisors with Caddy: Citizens Advice developed Caddy (Citizens Advice Digital Assistant), an internal chatbot designed to support its network of advisors, particularly new trainees. Caddy uses Retrieval-Augmented Generation (RAG), a method that grounds the AI’s answers in a private, trusted knowledge base to ensure accuracy and prevent hallucination. A key feature is its “human-in-the-loop” workflow, where supervisors can quickly validate answers before they are given to clients. A six-week trial demonstrated significant impact, with the evaluation finding that Caddy halved the response time for advisors seeking supervisory support, unlocking capacity to help thousands more people.
  • Frontline Justice — Supercharging Community Justice Workers: To support its network of non-lawyer “justice workers” in Alaska, Frontline Justice deployed an AI tool designed not just as a Q&A bot, but as a peer-to-peer knowledge hub. While the AI provides initial, reliable answers to legal questions, the system empowers senior justice workers to review, edit, and enrich these answers with practical, on-the-ground knowledge like local phone numbers or helpful infographics. This creates a dynamic, collaborative knowledge base where the expertise of one experienced worker in a remote village can be instantly shared with over 200 volunteers across the state.
  • Lone Star Legal Aid — Building a Secure Chatbot Ecosystem: Lone Star Legal Aid embarked on an ambitious in-house project to build three distinct chatbots on a secure RAG architecture to serve different user groups. One internal bot, LSLAsks, handles administrative information for the organization. Their internal bot for legal staff, Juris, was designed to centralize legal knowledge and reduce the administrative burden of research. A core part of their strategy involved rigorous A/B testing of four different search models (cleverly named after the Ninja Turtles) to meticulously measure accuracy, relevancy, and speed, with the ultimate goal of eliminating hallucinations and building user trust in the system.
  • People’s Law School (British Columbia) — Ensuring Quality in Public-Facing AI: The team behind the public-facing Beagle+ chatbot shared their journey of ensuring high-quality, reliable answers for the public. Their development process involved intensive pre- and post-launch evaluation. Before launch, they used a 42-question dataset of real-world legal questions to test different models and prompts until they achieved 99% accuracy. After launch, a team of lawyers reviewed every single one of the first 5,400 conversations to score them for safety and value, using the findings to continuously refine the system and maintain its high standard of quality.
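Several of these case studies (Caddy and Lone Star’s bots, for example) rest on the same retrieval-augmented generation pattern: retrieve passages from a private, trusted knowledge base, then answer only from what was retrieved. Below is a minimal sketch of that pattern in Python. The deliberately naive keyword retriever stands in for real vector search, and all names and data are invented for illustration:

```python
# Minimal sketch of the RAG pattern described above: retrieve from a
# trusted corpus, then build a prompt grounded only in what was found.
# The keyword-overlap retriever is a stand-in; production systems use
# embedding-based vector search. All names here are hypothetical.

def retrieve(question, corpus, top_k=3):
    """Rank documents by keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc["text"].lower().split())), doc)
              for doc in corpus]
    scored = [(s, doc) for s, doc in scored if s > 0]  # drop non-matches
    scored.sort(key=lambda pair: -pair[0])
    return [doc for _, doc in scored[:top_k]]

def build_grounded_prompt(question, corpus, top_k=3):
    """Build an LLM prompt grounded in retrieved passages, or None to refuse."""
    passages = retrieve(question, corpus, top_k)
    if not passages:
        return None  # nothing verified to ground on: refuse rather than guess
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return ("Answer ONLY from the sources below and cite them. "
            "If they do not contain the answer, say so.\n"
            f"{context}\nQuestion: {question}")
```

A real deployment would pass the returned prompt to an LLM and, as in Caddy’s human-in-the-loop workflow, route low-confidence or refused questions to a human supervisor.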

These successful implementations offered more than just inspiration; they surfaced a series of critical strategic debates that the entire access to justice community must now navigate.

Lessons Learned and Practical Strategies from the First Generation of AI+A2J Work

A consistent “lesson learned” from Day 1 was that legal aid AI only works when it’s treated as mission infrastructure, not as a cool add-on. Leaders emphasized values as practical guardrails: put people first (staff + clients), keep the main thing the main thing (serving clients), and plan for the long term — especially because large legal aid organizations are “big ships” that can’t pivot overnight.

Smart choice of projects: In practice, that means choosing projects that reduce friction in frontline work, don’t distract from service delivery, and can be sustained after the initial burst of experimentation.

An ecosystem of specific solutions: On the build side, teams stressed scoping and architecture choices that intentionally reduce risk. One practical pattern was a “one tool = one problem” approach, with different bots for different users and workflows (internal legal research, internal admin FAQs, and client-facing triage) rather than trying to make a single chatbot do everything.

Building Security- and Privacy-Forward Solutions: Security and privacy were treated as design requirements, not compliance afterthoughts — e.g., selecting an enterprise cloud environment already inside the organization’s security perimeter and choosing retrieval-augmented generation (RAG) to keep answers grounded in verified sources.

Keeping Knowledge Fresh: Teams also described curating the knowledge base (black-letter law + SME guidance) and setting a maintenance cadence so the sources stay trustworthy over time.

Figure out What You’re Measuring & How: On evaluation, Day 1 emphasized that “accuracy” isn’t a vibe — you have to measure it, iterate, and keep monitoring after launch. Practical approaches included: (1) building a small but meaningful test set from real questions, (2) defining what an “ideal answer” must include, and (3) scoring outputs on safety and value across model/prompt/RAG variations.

Teams also used internal testing with non-developer legal staff to ask real workflow questions, paired with lightweight feedback mechanisms (thumbs up/down + reason codes) and operational metrics like citations used, speed, and cost per question. A key implementation insight was that some “AI errors” are actually content errors — post-launch quality improved by fixing source content (even single missing words) and tightening prompts, supported by ongoing monitoring.
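To make that evaluation loop concrete, here is a hypothetical sketch in Python: a small test set of real questions, a definition of the facts an “ideal answer” must include, and a scoring function run across system variants. The questions, required facts, and names are all invented for illustration:

```python
# Hypothetical sketch of the evaluation loop described above: a small
# test set built from real questions, each paired with the facts an
# "ideal answer" must include, scored across model/prompt/RAG variants.
# All questions, facts, and names are illustrative placeholders.

TEST_SET = [
    {"question": "Can my landlord evict me without notice?",
     "must_include": ["written notice", "court order"]},
    {"question": "How do I respond to a debt collection lawsuit?",
     "must_include": ["deadline", "answer form"]},
]

def score_answer(answer, must_include):
    """Fraction of required facts that appear in the answer."""
    text = answer.lower()
    hits = sum(1 for fact in must_include if fact in text)
    return hits / len(must_include)

def evaluate(system, test_set):
    """Run one system variant over the whole test set; return mean score."""
    scores = [score_answer(system(item["question"]), item["must_include"])
              for item in test_set]
    return sum(scores) / len(scores)
```

In practice, teams would run `evaluate` against each variant, pick the best performer, and keep re-running it after launch alongside the thumbs up/down feedback and operational metrics described above.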

Be Ready with Policies & Governance: On deployment governance, teams highlighted a bias toward containment, transparency, and safe failure modes. One practical RAG pattern: show citations down to the page/section, display the excerpt used, and if the system can’t answer from the verified corpus, it should say so — explicitly.
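That containment pattern can be sketched as a simple response envelope: citations down to the page, the excerpt the answer relied on, and an explicit refusal when the verified corpus has no answer. A hypothetical illustration in Python, with field names that are not drawn from any specific product:

```python
# Sketch of the "cite or refuse" response pattern described above.
# The envelope carries page-level citations and the excerpt relied on,
# or an explicit refusal. Field names are illustrative placeholders.

def format_response(answer, passages):
    """Attach page-level citations and excerpts, or refuse explicitly."""
    if not passages:
        return {"answer": None,
                "refusal": "I can't answer this from the verified knowledge base."}
    return {
        "answer": answer,
        "citations": [{"source": p["source"],
                       "page": p["page"],
                       "excerpt": p["text"][:200]}  # show what was relied on
                      for p in passages],
    }
```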

There was also a clear warning about emerging security risks (especially prompt injection and attack surfaces when tools start browsing or pulling from the open internet) and the need to think about cybersecurity as capability scales from pilots to broader use. Teams described practical access controls (like 2FA) and “shareable internal agents” as ways to grow adoption without losing governance.

Be Ready for Data Access Blocks: Several Day 1 discussions surfaced the external blockers that legal aid teams can’t solve alone — especially data access and interoperability with courts and other systems.

Even when internal workflows are ready, teams run into constraints like restrictions on scraping or fragmented, jurisdiction-specific data practices, which make replication harder and increase costs for every new deployment. That’s one reason the “lessons learned” kept circling back to shared infrastructure: common patterns for grounded knowledge, testing protocols, security hardening, and the data pathways needed to make these tools reliable in day-to-day legal work.

Strategic Crossroads: Key Debates Shaping the Future of the AI+A2J Ecosystem

The proliferation of AI has brought the access to justice community to a strategic crossroads. The Summit revealed that organizations are grappling with fundamental decisions about how to acquire, build, and deploy this technology. The choices made in the coming years will define the technological landscape of the sector, determining the cost, accessibility, and control that legal aid organizations have over their digital futures.

The Build vs. Buy Dilemma

A central tension emerged between building custom solutions and purchasing sophisticated off-the-shelf platforms. We may end up with a “yes, and” approach that involves both.

The Case for Building:

Organizations like Maryland Legal Aid and Lone Star Legal Aid are pursuing an in-house development path. This is not just a cost-and-security decision but a strategic choice about building organizational capacity.

The primary drivers are significantly lower long-term costs — Maryland Legal Aid reported running their custom platform for their entire staff for less than $100 per month — and enhanced data security and privacy, achieved through direct control over the tech stack and zero-data-retention agreements with API providers.

Building allows for the precise tailoring of tools to unique organizational workflows and empowers staff to become creators.

The Case for Buying:

Conversely, presentations from Relativity, Harvey, Thomson Reuters, vLex/Clio, and others showcased the immense power of professionally developed, pre-built platforms. The argument for buying centers on leveraging cutting-edge technology and complex features without the significant upfront investment in hiring and maintaining an in-house development team.

This path offers immediate access to powerful tools for organizations that lack the capacity or desire to become software developers themselves.

Centralized Expertise vs. Empowered End-Users

A parallel debate surfaced around who should be building AI applications. The traditional model, exemplified by Lone Star Legal Aid, involves a specialized technical team that designs and develops tools for the rest of the organization.

In contrast, Maryland Legal Aid presented a more democratized vision, empowering tech-curious attorneys and paralegals to engage in “vibe coding.”

This approach envisions non-technical staff becoming software creators themselves, using new, user-friendly AI development tools to rapidly build and deploy solutions. It transforms end-users into innovators, allowing legal aid organizations to “start solving their own problems” fast, cheaply, and in-house.

Navigating the Role of Big Tech in Justice Services

The summit highlighted the inescapable and growing role of major technology companies in the justice space. The debate here centers on the nature of the engagement.

One path involves close collaboration, such as licensing tools like Notebook LM from Google or leveraging APIs from OpenAI to power custom applications.

The alternative is a more cautious approach: prioritizing advocacy for regulation and taxation, licensing of legal organizations’ knowledge and tools, and the implementation of robust public interest protections to ensure that the deployment of large-scale AI serves, rather than harms, the public good.

These strategic debates are shaping the immediate future of legal technology, but the summit also issued a more profound challenge: to use this moment not just to optimize existing processes, but to reimagine the very foundations of justice itself.

AI Beyond Automation: Reimagining the Fundamentals of the Justice System

The conversation at the summit elevated from simply making the existing justice system more efficient to fundamentally transforming it for a new era.

In a thought-provoking remote address, Professor Richard Susskind challenged attendees to look beyond the immediate applications of AI and consider how it could reshape the core principles of dispute resolution and legal help. This forward-looking perspective urged the community to avoid merely automating the past and instead use technology to design a more accessible, preventative, and outcome-focused system of justice.

The Automation Fallacy

Susskind warned against what he termed “technological myopia” — the tendency to view new technology only through the lens of automating existing tasks. He argued that simply replacing human lawyers with AI to perform the same work is an uninspired goal. Using a powerful analogy, he urged the legal community to avoid focusing on the equivalent of “robotic surgery” (perfecting an old process) and instead seek out the legal equivalents of “non-invasive therapy” and “preventative medicine” — entirely new, more effective ways to achieve just outcomes.

Focusing Upstream

This call to action was echoed in a broader directive to shift focus from downstream dispute resolution to upstream interventions. The goal is to leverage technology and data not just to manage conflicts once they arise, but to prevent them from escalating in the first place. This concept was vividly captured by Susskind’s metaphor of a society that is better served by “putting a fence at the top of the cliff rather than an ambulance at the bottom.”

The Future of Dispute Resolution

Susskind posed the provocative question, “Can AI replace judges?” but quickly reframed it to be more productive. Instead of asking if a machine can replicate a human judge, he argued the focus should be on outcomes: can AI systems generate reliable legal determinations with reasons?

He envisioned a future, perhaps by 2030, where citizens might prefer state-supported, AI-underpinned dispute services over traditional courts. In this vision, parties could submit their evidence and arguments to a “comfortingly branded” AI system that could cheaply, cheerfully, and immediately deliver a conclusion, transforming the speed and accessibility of justice.

Achieving such ambitious, long-term visions requires more than just technological breakthroughs; it demands the creation of a practical, collaborative infrastructure to build and sustain this new future.

Building Funding and Capacity for this Work

On the panel about building a National AI + A2J ecosystem, panelists discussed how to increase capacity and impact in this space.

The Need to Make this Space Legible as a Market

The panel framed the “economics” conversation as a market-making challenge: if we want new tech to actually scale in access to justice, we have to make the space legible — not just inspiring. There could be a clearer market for navigation tech in low-income “fork-in-the-road” moments. The panel highlighted that the nascent ecosystem needs three things to become investable and durable:

  • clearly defined problems,
  • shared infrastructure that makes building and scaling easier, and
  • business models that sustain products over time.

A key through-line in the panel’s commentary was: we can’t pretend grant funding alone will carry the next decade of AI+A2J delivery. Panelists suggested we need experimentation to find new payers — for example, employer-funded benefits and EAP dollars, or insurer/health-adjacent funding tied to social determinants of health — paired with stronger evidence that tools improve outcomes. This is connected to the need for shared benchmarks and evaluation methods that can influence how developers build and how funders (and institutions) decide what to back.

A Warning Not to Build New Tech on Bad Processes

The panel also brought a grounding reality check: even the best tech will underperform — or do harm — if it’s layered onto broken processes. Panelists pointed to projects where tech sat on top of high-default systems and contributed to worse outcomes.

The economic implication was clear: funders and institutions should pay for process repair and procedural barrier removal as seriously as they pay for new tools, because the ROI of AI depends on the underlying system actually functioning.

The Role of Impact Investing as a new source of capital

Building this ecosystem requires a new approach to funding. Kate Fazio framed the justice gap as a fundamental “market failure” in the realm of “people law” — the everyday legal problems faced by individuals. She argued that the two traditional sources of capital are insufficient to solve this failure: traditional venture capital is misaligned, seeking massive returns that “people law” cannot generate, while philanthropy is vital but chronically resource-constrained.

The missing piece, Fazio argued, is impact investing: a form of patient, flexible capital that seeks to generate both a measurable social impact and a financial return. This provides a crucial middle ground for funding sustainable, scalable models that may not offer explosive growth but can create enormous social value. But she highlighted a stark reality: of the 17 UN Sustainable Development Goals, Goal 16 (Peace, Justice, and Strong Institutions) currently receives almost no impact investment capital. This presents both a monumental challenge and a massive opportunity for the A2J community to articulate its value and attract a new, powerful source of funding to build the future of justice.

This talk of new capital, market-making, and funding strategies started to point the group to a clear strategic imperative. To overcome the risk of fragmented pilots and siloed innovation, the A2J community must start coalescing into a coherent ecosystem. This means embracing collaborative infrastructure, which can be hand-in-hand with attracting new forms of capital.

By reframing the “market failure” in people law as a generational opportunity for impact investing, the sector can secure the sustainable funding needed to scale the transformative, preventative, and outcome-focused systems of justice envisioned throughout the summit.

Forging an AI+A2J Ecosystem: The Path to Sustainable Scale and Impact

On Day 2, we challenged groups to envision how to build a strong AI and A2J development, evaluation, and market ecosystem. They came up with so many ideas, and we try to capture them below. Much of it is about having common infrastructure, shared capacity, and better ways to strengthen and share organic DIY AI tools.

A significant risk facing the A2J community is fragmentation, a scenario where “a thousand pilots bloom” but ultimately fail to create lasting, widespread change because efforts are siloed and unsustainable. The summit issued a clear call to counter this risk by adopting a collaborative ecosystem approach.

The working groups on Day 2 highlighted some of the key things that our community can work on, to build a stronger and more successful A2J provider ecosystem. This infrastructure-centered strategy emphasizes sharing knowledge, resources, and infrastructure to ensure that innovations are not only successful in isolation but can be sustained, scaled, and adapted across the entire sector.

Throughout the summit, presenters and participants highlighted the essential capacities and infrastructure that individual organizations must develop to succeed with AI. Building these capabilities in every single organization is inefficient and unrealistic. An ecosystem approach recognizes the need for shared infrastructure, including the playbooks, knowledge/data standards, privacy and security tooling, evaluation and certification, and more.

Replicable Playbooks to Prevent Parallel Duplication

Many groups at the Summit called for replicable solution playbooks that go beyond sharing repositories on GitHub and making conference presentations, connecting legal teams with the people and resources that can help them replicate successful AI solutions and localize them to their jurisdiction and organization.

A2J organizations don’t just need inspiration — they need proven patterns they can adopt with confidence. Replicable “how-tos” turn isolated success stories into field-level capability: how to scope a use case, how to choose a model approach, how to design a safe workflow, how to test and monitor performance, and how to roll out tools to staff without creating chaos. These playbooks reduce the cost of learning, lower risk, and help organizations move from pilots to sustained operations.

Replicable guidance also helps prevent duplication. Right now, too many teams are solving the same early-stage problems in parallel: procurement questions, privacy questions, evaluation questions, prompt and retrieval design, and governance questions. If the field can agree on shared building blocks and publish them in usable formats, innovation becomes cumulative — each new project building on the last instead of starting over.

A Common Agenda of Which Tasks and Issues to Build Solutions For

Without a shared agenda, the field risks drifting into fragmentation: dozens of pilots, dozens of platforms, and no cumulative progress. A common agenda does not mean one centralized solution — it means alignment on what must be built together, what must be measured, and what must be stewarded over time. It creates shared language, shared priorities, and shared accountability across courts, legal aid, community organizations, researchers, funders, and vendors.

This is the core reason the Legal Design Lab held the Summit: to convene the people who can shape that shared agenda and to produce a practical roadmap that others can adopt. The goal is to protect this moment from predictable failure modes — over-chill, backlash, duplication, and under-maintained tools — and instead create an ecosystem where responsible innovation compounds, trust grows, and more people get real legal help when they need it.

Evaluation Protocols and Certifications

Groups also called for more, and easier, evaluation and certification. They want high-quality, standardized methods for evaluation, testing, and long-term maintenance.

In high-stakes legal settings, “seems good” is not good enough. The field needs clear definitions of quality and safety, and credible evaluation protocols that different organizations can use consistently. This doesn’t mean one rigid standard for every tool — but it does mean shared expectations: what must be tested, what must be logged, what harms must be monitored, and what “good enough” looks like for different risk levels.

Certification — or at least standard conformance levels — can also shift the market. If courts and legal aid groups can point to transparent evaluation and safety practices, then vendors and internal builders alike have a clear target. That reduces fear-driven overreaction and replaces it with evidence-driven decision-making. Over time, it supports responsible procurement, encourages better products, and protects the public by making safety and accountability visible.

In addition, creating legal benchmarks for the most common and significant legal tasks can push LLM developers to improve their foundational models for justice use cases.
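
As a rough illustration of what a shared benchmark protocol could look like, here is a minimal sketch in Python. The task names, prompts, scoring rule, and the stub "model" are all invented for illustration; a real protocol would be defined collectively, with richer rubrics than exact-match accuracy.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    task: str        # e.g. "deadline-lookup" or "form-selection" (invented task names)
    prompt: str
    expected: str    # authoritative answer, maintained by the field

def score(items, run_model):
    """Exact-match accuracy per task. A real protocol would define richer
    rubrics, graded harm levels, and logging requirements."""
    results = {}
    for item in items:
        ok = run_model(item.prompt).strip().lower() == item.expected.strip().lower()
        results.setdefault(item.task, []).append(ok)
    return {task: sum(oks) / len(oks) for task, oks in results.items()}

# Toy benchmark with a stub "model" that always gives the same answer.
items = [
    BenchmarkItem("deadline-lookup", "How many days to answer an eviction complaint?", "5 days"),
    BenchmarkItem("deadline-lookup", "How many days to appeal a small claims judgment?", "30 days"),
]
accuracy = score(items, run_model=lambda prompt: "5 days")
```

The value of even a simple harness like this is that different organizations can run the same items against different tools and compare results on a common footing.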

Practical, Clear Privacy Protections

A block for many of the possible solutions is the safe use of AI with highly confidential, risky data. Privacy is not a footnote in A2J — it is the precondition for using AI with real people. Many of the highest-value workflows involve sensitive information: housing instability, family safety, immigration status, disability, finances, or criminal history. If legal teams cannot confidently protect client data, they will either avoid the tools entirely or use them in risky ways that expose clients and organizations to harm.

What is needed is privacy-by-design infrastructure: clear rules for data handling, retention, and access; secure deployment patterns; strong vendor contract terms; and practical training for staff about what can and cannot be used in which tools. The Summit is a place to align on what “acceptable privacy posture” should look like across the ecosystem — so privacy does not become an innovation-killer, and innovation does not become a privacy risk.
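
One small, concrete piece of such privacy-by-design infrastructure is a redaction pass before any client text reaches an external model. The sketch below is a hypothetical, minimal example: the patterns, placeholders, and sample note are invented, and real tooling would need far broader coverage (names, addresses, case numbers) plus human review.

```python
import re

# Hypothetical minimal redaction pass for a few common US identifiers.
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with its placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Client SSN 123-45-6789, reach her at 555-867-5309 or jane@example.org."
clean = redact(note)
```

The design point is that redaction rules should be shared, audited infrastructure rather than something each legal aid office invents ad hoc.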

More cybersecurity, testing, reliability engineering, and ongoing monitoring

Along with privacy risks, participants noted that many of the organic, DIY solutions are not prepared for cybersecurity risks. As AI tools become embedded in legal workflows, they become targets — both for accidental failures and deliberate attacks. Prompt injection, data leakage, insecure integrations, and overbroad permissions can turn a helpful tool into a security incident. And reliability matters just as much as brilliance: a tool that works 80% of the time may still be unusable in high-stakes practice if the failures are unpredictable.

The field needs a stronger norm of “safety engineering”: threat modeling, red-teaming, testing protocols, incident response plans, and ongoing monitoring after deployment. This is also where shared infrastructure helps most. Individual organizations should not each have to invent cybersecurity practices for AI from scratch. A common set of testing and security baselines would let innovators move faster while reducing systemic risk.

Inter-Agency/Court Data Connections

Many groups need to access and work with data from other agencies — like court docket files and records, other legal aid groups’ data, and more — in order to build highly effective, AI-powered workflows.

Participants called for more standards and data contracts that can facilitate systematic data access, collection, and preparation. Many of the biggest A2J bottlenecks are not about “knowing the law” — they’re about navigating fragmented systems. People have to repeat their story across multiple offices, programs, and portals. Providers can’t see what happened earlier in the journey. Courts don’t receive information in consistent, structured ways. The result is duplication, delay, and drop-off — exactly where AI could help, but only if the data ecosystem supports it.

Data Contracts for Interoperable Knowledge Bases

Many local innovators are starting to build out structured, authoritative knowledge on court procedure, forms and documents, strategies, legal authorities, service directories, and more. This knowledge data is built to power their local legal AI solutions, but right now it is stored and saved in unique local ways.

This investment in local authoritative legal knowledge bases makes sense. LLMs are powerful, but they are not a substitute for authoritative, maintainable legal knowledge. The most dependable AI systems in legal help will be grounded in structured knowledge: jurisdiction-specific procedures, deadlines, forms, filing rules, court locations, service directories, eligibility rules, and “what happens next” pathways.

But the worry among participants is that all of these highly localized knowledge bases will be one-offs for a specific org or solution. Ideally, when teams invest in building these local knowledge bases, they would follow shared standards so the content performs well and can be updated, audited, and reused across tools and regions.

This is why knowledge bases and data exchanges are central to the ecosystem approach. Instead of each organization maintaining its own isolated universe of content, we can build shared registries and common schemas that allow local control while enabling cross-jurisdiction learning and reuse. The aim is not uniformity for its own sake — it’s reliability, maintainability, and the ability to scale help without scaling confusion.
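
To make the "common schema" idea concrete, here is one hypothetical shape a shared data contract for a knowledge-base entry might take, sketched as a Python dataclass with a minimal conformance check. The field names are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class KnowledgeEntry:
    """Hypothetical shared schema for one unit of local legal knowledge."""
    jurisdiction: str    # e.g. "CA-Santa-Clara" (invented identifier format)
    topic: str           # e.g. "eviction/answer-deadline"
    content: str         # the authoritative guidance text
    source_url: str      # where the underlying rule or form lives
    last_reviewed: date  # supports auditing and staleness checks

def is_valid(entry: KnowledgeEntry) -> bool:
    """Minimal conformance check: every field present and non-empty."""
    return all(value not in (None, "") for value in asdict(entry).values())

entry = KnowledgeEntry(
    jurisdiction="CA-Santa-Clara",
    topic="eviction/answer-deadline",
    content="Tenants generally have a short deadline to respond; check the summons.",
    source_url="https://example.org/eviction-guide",
    last_reviewed=date(2025, 9, 1),
)
valid = is_valid(entry)
```

Even this much structure, if agreed on across organizations, would let one jurisdiction's entries be validated, audited for staleness, and reused by another jurisdiction's tools.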

More training and change management so legal teams are ready

Even the best tools fail if people don’t adopt them in real workflows. Legal organizations are human systems with deeply embedded habits, risk cultures, and informal processes. Training and change management are not “nice to have” — they determine whether AI becomes a daily capability or a novelty used by a handful of early adopters.

What’s needed is practical, role-based readiness support: training for leadership on governance and procurement, training for frontline staff on safe use and workflow integration, and support for managers who must redesign processes and measure outcomes. The Summit is a step toward building a shared approach to readiness — so the field can absorb change without burnout, fragmentation, or loss of trust.

Building Capability & Lowering Costs of Development/Use

One of the biggest barriers to AI-A2J impact is that the “real” version of these tools — secure deployments, quality evaluation, integration into existing systems, and sustained maintenance — can be unaffordable when each court or legal aid organization tries to do it alone. The result is a familiar pattern: a few well-resourced organizations build impressive pilots, while most teams remain stuck with limited access, short-term experiments, or tools that can’t safely touch real client data.

Coordination is the way out of this trap. When the field aligns on shared priorities and shared building blocks, we reduce duplication and shift spending away from reinventing the same foundational components toward improving what actually matters for service delivery.

Through coordination, the ecosystem can also change the economics of AI itself. Shared evaluation protocols, reference architectures, and standard data contracts mean vendors and platform providers can build once and serve many — lowering per-organization cost and making procurement less risky. Collective demand can also create better terms: pooled negotiation for pricing, clearer requirements for privacy/security, and shared expectations about model behavior and transparency.

Just as importantly, coordinated open infrastructure — structured knowledge bases, service directories, and interoperable intake/referral data — reduces reliance on expensive bespoke systems by making high-value components reusable across jurisdictions.

The goal is not uniformity, but a commons: a set of shared standards and assets that makes safe, high-quality AI deployment feasible for the median organization, not just the best-funded one.

Conclusion

The AI + Access to Justice Summit is designed as a yearly convening point — because this work can’t be finished in a single event. Each year, we’ll take stock of what’s changing in the technology, what’s working on the ground in courts and legal aid, and where the biggest gaps remain. More importantly, we’ll use the Summit to move from discussion to shared commitments: clearer priorities, stronger relationships across the ecosystem, and concrete next steps that participants can carry back into their organizations and collaborations.

We are also building the Summit as a launchpad for follow-through. In the months after convening, we will work with participants to continue progress on common infrastructure: evaluation and safety protocols, privacy and security patterns, interoperable knowledge and data standards, and practical implementation playbooks that make adoption feasible across diverse jurisdictions. The aim is to make innovation cumulative — so promising work does not remain isolated in a single pilot site, but becomes reusable and improvable across the field.

We are deeply grateful to the sponsors who made this convening possible, and to the speakers who shared lessons, hard-won insights, and real examples from the frontlines.

Most of all, thank you to the participants — justice leaders, technologists, researchers, funders, and community partners — who showed up ready to collaborate, challenge assumptions, and build something larger than any single organization can create alone. Your energy and seriousness are exactly what this moment demands, and we’re excited to keep working together toward a better 2030.

Categories
Current Projects

Filing Fairness Toolkit

The Stanford Legal Design Lab & the Rhode Center on the Legal Profession have just released the Filing Fairness Toolkit.

The toolkit covers 4 areas, with diagnostics, maturity models, and actionable guidance for:

  1. improving Filing Technology Infrastructure
  2. building a healthy Filing Partner Ecosystem
  3. establishing good Technology Governance
  4. refining Forms & Filing Processes

This Toolkit is the product of several years of work, design sessions, collaborations with courts and vendors across the country, and stakeholder interviews. It is for court leaders, legal tech companies, legal aid groups, and government officials who are looking for practical guidance on how to make sure that people can find, complete, and file court forms.

Check out our diagnostic tool to see how your local court system measures up to national best practices in forms, efiling, and services.

We know efiling and court technology can be confusing (if not intimidating). We’ve worked hard to make these technical terms & processes more accessible to people beyond IT staff. Getting better efiling systems in place can unlock new opportunities for access to justice.

Please let us know if you have questions, ideas, and stories about making forms, efiling, and other court tech infrastructure more accessible, user-friendly, and impactful.

Read more at the Toolkit announcement page.

Categories
AI + Access to Justice Current Projects

Presentation to Indiana Coalition for Court Access

On October 20th, Legal Design Lab executive director Margaret Hagan presented on “AI and Legal Help” to the Indiana Coalition for Court Access.

This presentation was part of a larger discussion about research projects, a learning community of judges, and evidence-based court policy and rules changes. What can courts, legal aid groups, and statewide justice agencies be doing to best serve people with legal problems in their communities?


Margaret’s presentation covered the lab’s initial user research about how different members of the public think about AI platforms for legal problem-solving, and how they use these platforms to deal with problems like evictions. The presentation also spotlighted concerning trends, mistakes, and harms in public use of AI for legal problem-solving, which justice institutions and technology companies should address in order to prevent consumer harm while harnessing AI’s potential to help people understand the law and take action to resolve their legal problems.

The discussion after the presentation covered topics like:

  • Is there a way for justice actors to build a more authoritative legal-information AI model, especially with key legal information about local laws and rights, court procedures and timelines, court forms, and service organizations’ contact details? This might help the AI platforms avoid mistaken information or hallucinated details.
  • How could researchers measure the benefits and harms of AI-provided legal answers, compared to legal-expert-provided answers, compared to no services at all? Aside from anecdotes and small samples, is there a more deliberate way to analyze the performance of AI platforms when it comes to answering people’s questions about the law, procedures, forms, and services? This might include systematically measuring how often these platforms make mistakes, categorizing exactly what the mistakes are, and estimating or measuring how much harm emerges from these mistakes. A similar deliberate protocol might be applied to the benefits that these platforms provide.
Categories
AI + Access to Justice Class Blog Current Projects

AI Goes to Court: The Growing Landscape of AI for Access to Justice

By Jonah Wu

Student research fellow at Legal Design Lab, 2018-2019

1. Can AI help improve access to civil courts?

Civil court leaders have a strong new interest in how artificial intelligence can improve the quality and efficiency of legal services in the justice system, especially for the problems that self-represented litigants face [1,2,3,4,5]. The promise is that artificial intelligence can address the fundamental crises in courts: that ordinary people are not able to navigate the system clearly or efficiently; that courts struggle to manage vast amounts of information; and that litigants and judicial officials often have to make complex decisions with little support.

If AI is able to gather and sift through vast troves of information, identify patterns, predict optimal strategies, detect anomalies, classify issues, and draft documents, the promise is that these capabilities could be harnessed for making the civil court system more accessible to people.

The question then, is how real these promises are, and how they are being implemented and evaluated. Now that early experimentation and agenda-setting have begun, the study of AI as a means for enhancing the quality of justice in the civil court system deserves greater definition. This paper surveys current applications of AI in the civil court context. It aims to lay a foundation for further case studies, observational studies, and shared documentation of AI for access to justice development research. It catalogs current projects, reflects on the constraints and infrastructure issues, and proposes an agenda for future development and research.

2. Background to the Rise of AI in the Legal System

When I use the term Artificial Intelligence, I distinguish it from general software applications that are used to input, track, and manage court information. Our basic criterion for AI-oriented projects is that the technology has the capacity to perceive knowledge, make sense of data, generate predictions or decisions, translate information, or otherwise simulate intelligent behavior. AI does not include all court technology innovations. For example, I am not considering websites that broadcast information to the public; case or customer management systems that store information; or kiosks, apps, or mobile messages that communicate case information to litigants.

The discussion of AI in criminal courts is currently more robust than in civil courts. It has been proposed as a means to monitor and recognize defendants; support sentencing and bail decisions; and better assess evidence [3]. Because of the rapid rise of risk assessment AI in the setting of bail or sentencing, there has been more description and debate on AI [6]. There has been less focus on AI’s potential, or its concerns, in the civil justice system, including for family, housing, debt, employment, and consumer litigation. That said, there has been a robust discourse over the past 15 years of what technology applications and websites could be used by courts and legal aid groups to improve access to justice [7].

The current interest in AI for civil court improvements is in sync with a new abundance of data. As more courts have gathered data about administration, pleadings, litigant behavior, and decisions [1], powerful opportunities have emerged for research and analytics in the courts that can lead to greater efficiency and better design of services. Some groups have managed to use data to bring enormous new volumes of cases into the court system — like debt collection agencies, which have automated the filing of cases against people for debt [8], often resulting in complaints that have missing or incorrect information and minimal, ineffective notice to defendants. If litigants like these can harness AI strategies to flood the court with cases, could the courts use their own AI strategies to manage and evaluate these cases and others — especially to better protect unwitting defendants against low-quality lawsuits?

The rise in interest in AI coincides with state courts experiencing economic pressure: budgets are cut, hours are reduced, and some locations are even closed [9]. Despite financial constraints, courts are expected to provide modern, digital, responsive services like those in other consumer sectors. This presents a challenging expectation for the courts. How can they provide judicial services in sync with rapidly modernizing service sectors — finance, medicine, and other government bodies — within significant cost constraints? The promise of AI is that it can scale up quality services and improve efficiency, boosting performance and saving costs [10].

A final background factor to consider is the growing concern over public perceptions of the judicial system. Yearly surveys indicate that communities find courts out of touch with the public, with calls for greater empathy and engagement with “everyday people” [11]. Given that the mission of the court is to provide an avenue to lawful justice for its constituents, if AI can help the court better achieve that mission without adding adverse risks, it would help the courts establish greater procedural and distributive justice for litigants, and hopefully bolster the courts’ legitimacy and public engagement.

3. What could be? Proposals in the Literature for AI for access to justice

What has the literature proposed on how AI techniques can address the access to justice crisis in civil courts? Over the past several decades, distinct use cases have been proposed for development. There is a mix of litigant-focused use cases (to help them understand the system and make stronger claims), and court-focused use cases (to help it improve its efficiency, consistency, transparency, and quality of services).

  • Answer a litigant’s questions about how the law applies to them. Computational law experts have proposed automated legal reasoning as a way to understand whether a given case is in accordance with the law [12]. Court leaders also envision AI helping litigants conduct effective, direct research into how the law would apply to them [4,5]. Questions of how the law would apply to a given case lie on a spectrum of complexity. Questions that are more straightforwardly algorithmic (e.g., whether a person exceeded a speed limit, or whether a quantity or date is in an acceptable range) can be automated with little technical challenge [13]. Questions that have more qualitative standards, like whether something was reasonable, unconscionable, foreseeable, or done in good faith, are not as easily automated — but they might be with greater work in deep learning and neural networks. Many propose that expert systems or AI-powered chatbots might help litigants know their rights and make claims [14].
  • Analyze the quality of a legal claim and evidence. Several proposals center on making it easier to understand what has been submitted to court, and how a case has proceeded. Some exploratory work has pointed toward how AI could automatically classify a case docket (the chronological events in a case) so that it could be understood computationally [15]. Machine learning could find patterns in claims and other legal filings, indicating whether something has been argued well and whether the law supports it, and evaluating it against competing claims [16].
  • Provide coordinated guidance for a person without a lawyer. Many have proposed focusing on the development of a holistic AI-based system to guide people without lawyers through the choices and procedures of a civil court case. One vision is of an advisory system that would help a person understand the available forms of relief, helping them understand whether they can meet the requirements, informing them of procedural requirements, and helping them to draft court documents [17,18].
  • Predict and automate decision-making. Another proposal, discussed within the topic of online dispute resolution, is around how AI could either predict how a case will be decided (and thus give litigants a stronger understanding of their chances) or actually generate a proposal for how a dispute should be settled [19,20]. In this way, prediction of judicial decisions could be useful to access to justice. It could be integrated into online court platforms where people are exploring their legal options, or where they are entering and exchanging information in their case. The AI would help litigants make better choices regarding how they file, and it would help courts expedite decision-making by either supporting or replacing human judges’ rulings.

4. What is happening so far? AI in action for access

With many proposals circulating about how AI might be applied for access to justice, where can we see these possibilities being developed and piloted with courts? Our initial survey identifies a handful of applications in action.

4.1. Predicting settlement arrangements, judicial decisions, and other outcomes of claims

One of the most robust areas of AI in access-to-justice work has been developing applications to predict how a claim, case, or settlement will be resolved by a court. This area of predictive analytics has been demonstrated in many research projects, and in some cases has been integrated into court workflows.

In Australian family law courts, a team of artificial intelligence experts and lawyers has begun to develop the Split-Up system, using rules-based reasoning in concert with neural networks to predict outcomes for property disputes in divorce and other family law cases [21]. Judges use the Split-Up system to support their decision-making: it helps them identify the assets of the marriage that should be included in a settlement, and then establishes what percentage of the common pool each party should receive — a discretionary judicial choice based on factors including contributions, amount of resources, and future needs. The system incorporates 94 relevant factors into its analysis, using neural-network statistical techniques. The judge can then propose a final property order based on the system’s analysis. The system also seeks to offer transparent explanations of its decisions, using Toulmin argument structures to represent how it reached its predictions.

Researchers have created algorithms to predict Supreme Court and European Court of Human Rights decisions [22,23,24]. They use natural language processing and machine learning to construct models that predict the courts’ decisions with strong accuracy. Their predictions draw on the formal facts submitted in the case to identify what the likely outcome, and potentially even individual justices’ votes, will be. This judicial decision prediction research could possibly be used to offer predictive analytic tools to litigants, so they can better assess the strength of their claim and understand what outcomes they might face. Legal technology companies like Ravel and Lex Machina [25,26] claim that they can predict judges’ decisions and case behavior, or the outcomes of an opposing party. These applications are mainly aimed at corporate-level litigation, rather than access to justice.

4.2. Detecting abuse and fraud against people the court oversees

Courts’ role in overseeing guardians and conservators means that they should be reducing financial exploitation of vulnerable people by those appointed to protect them. With particular concern for financial abuse of the elderly by their conservators or guardians, a team in Utah began building an AI tool to identify likely fraud in the reported financial transactions that conservators or guardians submit to the court. The system, developed in concert with a Minnesota court system at a hackathon, would detect anomalies and fraud-related patterns, and send flag notifications to courts to investigate further [28].
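
The core idea — flagging reported transactions that deviate sharply from a guardian's usual spending pattern — can be sketched with a basic statistical outlier check. This is an illustrative stand-in using a z-score test, not the actual method of the system described above.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag transaction amounts more than `threshold` standard deviations
    from the mean of the reported history (a simple z-score check)."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Routine monthly care expenses, plus one suspiciously large withdrawal.
history = [120, 135, 110, 140, 125, 130, 5000]
flags = flag_anomalies(history, threshold=2.0)
```

A production system would add fraud-pattern rules (round-number withdrawals, payments to the guardian's own accounts) on top of statistical checks, and route flags to court staff rather than acting automatically.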

4.3. Preventative Diagnosis of legal issues, matching to services, and automating relief

A robust branch of applications has formed around using AI techniques to spot people’s legal needs (needs they potentially did not know they had), and then either match them to a service provider or automate a service for them to help resolve the need. This approach has begun with the expungement use case, in which states have policies to help people clear their criminal records, but without widespread uptake. With this problem in mind, groups have developed AI programs to automatically flag who has a criminal record that could be cleared, and then to streamline the expungement process for their region. In Maryland, Matthew Stubenberg from Maryland Volunteer Lawyers Service (now in Harvard’s A2J Lab) built a suite of tools to spot their organization’s clients’ problems, including overdue bills and criminal records that could be expunged. This tool helped legal aid attorneys diagnose their clients’ problems. Stubenberg also made the criminal record application public-facing, as MDExpungement, so anyone can automatically find out whether they have a criminal record and submit a request to clear it [29].

Code for America is working inside courts to develop another AI application for expungement. They are working with the internal databases of California courts to automatically identify expungement-eligible records, eliminating the need for individuals to apply for relief themselves [30].

The authors, in partnership with researchers at the Suffolk LIT Lab, are working on an AI application to automatically detect legal issues in the descriptions of life problems that people share in online forums, social media, and search queries [31]. This project involves labeling datasets of people’s problem stories, taken from Reddit and online virtual legal clinics, to train a classifier to automatically recognize what specific legal issue a person might have based on their story. This classifier could be used to power referral bots (which send people messages with local resources and agencies that could help them), or to translate people’s problem stories into actionable legal triage and advisory systems, as has been envisioned in the literature.
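
A toy version of such an issue-spotting classifier — here a from-scratch naive Bayes over bag-of-words, with an invented four-example training set — might look like the sketch below. Real systems would train on large labeled corpora with far richer features; nothing here reflects the actual project's model.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (story, label). Returns per-label word counts and label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for story, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(story))
    return word_counts, label_counts

def classify(story, word_counts, label_counts):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(story):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented, tiny training set for illustration only.
examples = [
    ("my landlord gave me an eviction notice and changed the locks", "housing"),
    ("the landlord will not return my security deposit", "housing"),
    ("a debt collector keeps calling about a credit card bill", "debt"),
    ("I was sued over an old medical bill I cannot pay", "debt"),
]
word_counts, label_counts = train(examples)
predicted = classify("I got a notice from my landlord about eviction", word_counts, label_counts)
```

The output of a classifier like this would feed a referral step, mapping the predicted issue category to local services, rather than being shown to the person as legal advice.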

4.4. Analyzing quality of claims and citations

Considering how to help courts be more efficient in their analysis of claims and evidence, there are some applications — like the product Clerk from the company Judicata — that can read, analyze, and score submissions that people and lawyers make to the court [32]. These applications can assess the quality of a legal brief, giving clerks, judges, or litigants the ability to identify the source of the arguments, cross-check them against the original, and possibly find other related cases. In addition to improving the efficiency of analysis, such a tool could be used for better drafting of submissions to the court — with litigants checking the quality of their pleadings before submitting them.

4.5. Active, intelligent case management

The Hebei High Court in China has reported the development of a smart court management AI, termed the Intelligent Trial 1.0 system [33]. It automatically scans and digitizes filings; it classifies documents into electronic files; it matches the parties to existing case parties; it identifies relevant laws, cases, and legal documents to be considered; it automatically generates all necessary court procedural documents, such as notices and seals; and it distributes cases to judges so that they are put on the right track. The system coordinates these AI tasks into a workstream that can reduce the workloads of court staff and judges.

4.6. Online dispute resolution platforms and automated decision-making

Online dispute resolution (ODR) platforms have grown around the United States, some of them using AI techniques to sort claims and propose settlements. Many ODR platforms do not use AI, but rather act as collaboration and streamlining platforms for litigants’ tasks. ODR platforms like Rechtwijzer, MyLawBC, and the British Columbia Civil Resolution Tribunal use some AI techniques to sort which people can use the platform to tackle a problem, and to automate decision-making and settlement or outcome proposals [34].

We also see new pilots of online dispute resolution platforms in Australia, in the state of Victoria with its VCAT pilot for small claims (now on hiatus, awaiting future funding) — and in Utah, for small claims in a court outside Salt Lake City.

These pilots use platforms like Modria (part of Tyler Technologies), Modron, or Matterhorn from Court Innovations. How much AI is part of these systems is not clear — they seem to be mainly platforms for logging details and preferences, communicating between parties, and drafting and signing settlements (without any algorithm or AI tool making a decision proposal or crafting a strategy for the parties). If the pilots are successful and become ongoing projects, we can expect future iterations to involve more AI-powered recommendations or decision tools.

5. Agenda for the Development and Infrastructure of AI in Access to Justice

If an ecosystem of access to justice AI is to be accelerated, what is the agenda to guide the growth of projects? There is work to be done on the infrastructure of sharing data, defining ethics standards, security standards, and privacy policies. In addition, there is organizational and coalition-building work, to allow for more open innovation and cross-organization initiatives to grow.

5.1. Opening and standardizing datasets

Currently, the field of AI for access to justice is hampered by the lack of open, labeled datasets. Courts do hold relatively small datasets, but there are no standard protocols to make them available to the public or to researchers, nor are there labeled datasets to be used in training AI tools [35]. There are a few examples of labeled court datasets, such as from the Board of Veterans Appeals [36]. A newly announced US initiative, the National Court Open Data Standards Project, will promote standardization of existing court data, so that there can be more seamless sharing and cross-jurisdiction projects [37].

5.2. Making Policies to Manage Risks

There should be multi-stakeholder design of this infrastructure, to define an evolving set of guidance on the following major risks that court administrators have identified with new AI in courts [45]:

  • Bias in training datasets. Can we better spot, rectify, and control for the inherent biases in the datasets used to train new AI?
  • Lack of transparency of AI tools. Can we create standard ways to communicate how an AI tool works, so that there is transparency to litigants, defendants, court staff, and others, and so that robust review of the tool is possible?
  • Privacy of court users. Can we have standard redaction and privacy policies that prevent individuals’ sensitive information from being exposed [38]? There are several redaction software applications that use natural language processing to scan documents and automatically redact sensitive terms [39,40].
  • New concerns for fairness. Will courts and the legal profession have to change how they define ‘information versus advice’, the distinction that currently guides regulations about what types of technological help can be given to litigants? Also, if AI exposes patterns of arbitrary or biased decision-making in the courts, how will the courts respond — changing personnel, organizational structures, or court procedures to better provide fairness?
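As a glimpse of how such redaction tools work, here is a minimal sketch using regular expressions. Production tools pair patterns like these with NLP-based named-entity recognition; the two patterns below are illustrative assumptions, not any product’s actual rule set:

```python
import re

# Illustrative redaction patterns for two common sensitive-token shapes.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace sensitive tokens with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```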

For many of these policy questions, there are government-focused ethics initiatives that the justice system can learn from, as they define best practices and guiding principles for integrating AI responsibly into powerful public institutions [42,43,44].

6. Conclusion

This paper’s survey of proposals and applications for AI’s use for access to justice demonstrates how technology might be operationalized for social impact.

If there is more infrastructure-oriented work now, establishing how courts can share data responsibly and setting new standards for privacy, transparency, fairness, and due process with regard to AI applications, this nascent set of projects may blossom into many more pilots over the next several years.

In a decade, there may be a full ecosystem of AI-powered courts, in which a person who faces a problem with eviction, credit card debt, child custody, or employment discrimination could have clear, affordable, efficient ways to use the public civil justice system to resolve their problem. Especially with AI offering more preventative, holistic support to litigants, it might have anti-poverty effects as well, ensuring that the legal system resolves people’s potential life crises, rather than exacerbating them.

Categories
Current Projects Triage and Diagnosis

Legal screeners and intake for medical providers

Mobile apps aimed at non-legal service providers help them screen their clients for legal problems.

For example, there is an app specifically designed for use in medical-legal partnerships, in which users have come to a medical facility to deal with a medical problem.

The app can be used by a service provider at the clinic or hospital to screen the patient for legal issues that might be going on, and perhaps related to the health issues.

This type of software is beneficial because it provides expert knowledge in an easy-to-use fashion, and it can streamline the screening process, especially for those who are not experts in law.

An example of such a mobile screener comes from the Legal Aid Society of Louisville:

Legal Aid Society of Louisville (LAS) leveraged mobile technologies to develop a legal assessment tool for medical/legal partnerships that effectively screens low‐income patients for legal problems and alerts medical professionals of the need to refer patients to a legal partner for timely assistance. The “Law and Health Screening Tool” consists of an iPad application and a companion web-based survey system. It has been successfully piloted at the University of Louisville Pediatrics Children and Youth Clinic, a high-traffic urban clinic with a high poverty, diverse patient population.

Access Innovation - medical legal screener alert screen

The tool has four main functions:

  1. A “law and health survey,” which parents/guardians of patients complete using a tablet. This is a quick legal screen meant to be easily completed by parents while waiting to be seen at the clinic. The survey uses question branching, so that the response to one question determines the next question posed.
  2. An “alert” function, which electronically notifies MLP staff when a survey response indicates a possible health-related legal need. MLP staff may then retrieve contact information from the administrative website for follow-up.
  3. A “resource” function, whereby a “yes” response to certain questions triggers an offer of a relevant resource, such as information about utility assistance, foreclosure prevention services or free tax-preparation assistance and the earned income tax credit.
  4. A data collection and reporting function, which aggregates survey answers for reporting and monitoring purposes. These metrics provide insight into the legal needs of the clinic’s patient population and how MLP resources might be tailored to address them effectively.
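The question branching described in the survey above can be sketched as a small decision graph, where each answer maps to the next question and, optionally, an alert flag for MLP staff. The question IDs, wording, and alert rules below are invented for illustration, not LAS’s actual instrument:

```python
# Each answer maps to (next_question_id, alert_flag); "end" terminates.
SURVEY = {
    "housing": {"text": "Are you behind on rent?",
                "yes": ("evict", None), "no": ("utility", None)},
    "evict":   {"text": "Has your landlord filed an eviction case?",
                "yes": ("end", "eviction"), "no": ("utility", None)},
    "utility": {"text": "Have your utilities been shut off?",
                "yes": ("end", "utilities"), "no": ("end", None)},
}

def run_survey(answers, start="housing"):
    """Walk the branching survey and collect alerts for MLP follow-up."""
    node, alerts = start, []
    while node != "end":
        nxt, flag = SURVEY[node][answers[node]]
        if flag:
            alerts.append(flag)
        node = nxt
    return alerts
```

Because the graph only visits questions a given answer path reaches, a parent waiting at the clinic sees a short, relevant screen rather than the full question bank.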

The final report from LSC-TIG is here: TIG 11094: LAS Medical Legal Partnership App

Categories
Ideabook Work Product Tool

Access to Justice Tech: Concepts

Open Law Lab - Access to Justice tech

I’ve been searching around for the current landscape of actual initiatives & concept designs for tech tools to provide more access to justice.

I went back to a presentation, Assisted Legal Decisionmaking, by law professor Josh Blackman at Stanford last year. He showed some screenshots of legal products he’s been thinking of.

Open Law Lab - Access to Justice app 2

The concept app would allow the user to input their question. The app would respond with follow-up questions to nail the issue down more concretely. And then it would direct the user to the right resources. It follows the expert-system model, with guided interviews, that A2J Author and other access-to-justice tech has relied upon.

Open Law Lab - Access to Justice app

Categories
Current Projects Procedural Guide

Citizenship Apps

Open Law Lab - Citizenship Apps
Citizenshipworks is building online and mobile apps aimed at non-citizens in the US — trying to give them resources and tutorials to navigate their way through citizenship.
They have checklists, expert system interviews, and tutorials to help the users along.
Damian Thompson of the Knight Foundation writes of the new app:

I’m also proud to report on last week’s launch of the CitizenshipWorks mobile app for iOS and Android. Knight Foundation is the chief funder. Tony Lu, one of the app’s developers, says its combination of features is unique, integrating citizenship eligibility tools, such as a “trips calculator” and a document checklist; a legal directory; and study aids.

Those resources are immensely helpful for people navigating the path to citizenship. For example, green card holders who want to become citizens have to list every trip they’ve taken abroad on their applications. Imagine if you had to list every trip you’ve taken over the past five years. It would be a nightmare, especially if you didn’t keep systematic records. This is where the trips calculator can help.
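The trips calculator’s core arithmetic is simple: sum the days abroad across all trips and compare them against the physical-presence rules. This sketch is illustrative only; the 913-day cutoff approximates the roughly 30 months abroad that would exceed the standard five-year path’s physical-presence requirement, and real eligibility analysis has many more wrinkles:

```python
from datetime import date

def days_abroad(trips):
    """Sum days outside the US across (departure, return) date pairs --
    the kind of total a 'trips calculator' would report."""
    return sum((ret - dep).days for dep, ret in trips)

def flags_presence_concern(trips, threshold_days=913):
    """Hypothetical screen: flag applicants whose total time abroad
    exceeds an illustrative physical-presence ceiling."""
    return days_abroad(trips) > threshold_days
```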

Open Law Lab - Citizenshipworks - cw-collage-640

Categories
Advocates Current Projects

Apps to Manage Lawyers

Open Law Lab - viewabill - app to manage lawyer

Here’s an article by Jennifer Smith in the Wall Street Journal on a new crop of apps that help clients find and monitor lawyers. It mentions Viewabill (tracking how much their lawyers are charging them, in real time); Rocket Lawyer’s mobile app (create basic legal documents and buy plans for low-cost access to advice); Attorney Proz (lists area lawyers, who have paid to be listed); Ask a Lawyer (ask lawyers in Kalamazoo basic legal questions and get free answers by email); and a forthcoming LegalZoom app.

Now that people use apps to bank, order food and even monitor eBay auction bids, it was only a matter of time before they called in the lawyers.

Appearing in app stores are programs to help people keep track of their attorneys’ bills, draft legal documents and locate nearby lawyers.

Attorneys are doing more work on smartphones and tablets, and they have a whole host of apps at their disposal to help look up case law, track client calls and even assist with depositions and jury selection.

But until recently, few options existed for clients who wished to track cases or seek advice using mobile devices. This new crop of apps aims to add transparency, and a measure of convenience, to the process.

One new app, Viewabill, lets people track how much their lawyers are charging them in real-time. The idea is to head off sticker shock when business owners and company lawyers open up their monthly bills.

The app acts as kind of a client nanny-cam. It captures information as law firms enter it into their billing systems and transmits it to clients’ mobiles and desktops. Users select how often they want to get updates, set alerts pegged to certain dollar thresholds and can mark questionable items. The app can also be used to track hours logged by accountants and other professional service providers.
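The threshold-alert behavior described here amounts to keeping a running total over streamed time entries and notifying the client the first time it crosses their chosen dollar level. A minimal sketch (the field names and alert format are assumptions, not Viewabill’s actual design):

```python
def billing_alerts(entries, threshold):
    """Tally streamed time entries; raise one alert when the matter's
    running total first crosses the client's dollar threshold."""
    total, alerted, alerts = 0.0, False, []
    for e in entries:
        total += e["hours"] * e["rate"]
        if total >= threshold and not alerted:
            alerts.append(f"Matter {e['matter']} passed ${threshold:,.0f} "
                          f"(now ${total:,.2f})")
            alerted = True
    return total, alerts
```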

The app is now being used by a handful of companies and law firms on a beta basis, with a wider launch planned this month, said Florida-based entrepreneur David Schottenstein, who co-founded the enterprise with an attorney friend, Robbie Friedman. Firms would pay an annual cost of $25 to $40 per matter, depending on volume, or $25,000 for unlimited use, said Mr. Schottenstein.


“It helps them to understand what we do,” said Brian Baker, a bankruptcy lawyer at Ravin Greenberg LLC in New Jersey, which has been testing the app.

Errol Feldman, general counsel for JPay Inc., a Florida company that provides payment transfers and other services to inmates at corrections facilities, has been using Viewabill to make sure firms working to resolve contract disputes do so in a timely fashion.

Legal consultant Susan Hackett said the app was the latest example of a push for greater communication between lawyers and clients, who increasingly want more involvement in the work they assign to outside law firms.

Some companies with big in-house legal departments have already invested in software programs that let clients track the progress of legal matters or monitor law firm bills from their desktop computers. Such systems don’t come cheap, and not many clients use them yet—fewer than 20% of general counsel, according to a 2011 poll by the Association of Corporate Counsel.

Not all law firms may welcome the additional element of client control on the legal side of things. For Viewabill to work, for instance, lawyers have to enter their hours in a timely fashion.

“These technologies may scare people,” Ms. Hackett said. “But they are all productive parts of the march towards clients and lawyers having conversations in real time.”

This month online legal services company Rocket Lawyer Inc. is debuting a mobile app tailored to its customer base: consumers and small business owners who log on to the site to create basic legal documents or buy plans that provide low-cost access to legal advice.

Charley Moore, Rocket Lawyer’s founder and executive chairman, said more site traffic is coming from tablets and smartphones these days, reflecting his customers’ increasingly mobile bent. Many are small business owners who spend much of their time on the road, he said.

“Their office is their dashboard, so we have to deliver the tools,” Mr. Moore said.

Customers can use the app to create a non-disclosure agreement (more forms will soon be available) or modify existing documents they have already created. The app itself is free, and users can access some functions gratis.

Users can also locate nearby attorneys from Rocket Lawyer’s network—the app is integrated with Google Maps—and punch in basic legal questions, although the reply, which is supposed to arrive within one business day, may not be as swift as some might hope.

A handful of other apps offer similar services. Attorney Proz also lists area lawyers, who pay to be included. Ask a Lawyer, an app linked to Kalamazoo, Mich., law firm Willis Law, also offers free answers to basic legal questions, with replies sent to users’ email addresses.

Not to be outdone, online legal services company LegalZoom.com Inc., a Rocket Lawyer competitor, also has an app in the works, a company spokeswoman said.

A version of this article appeared March 11, 2013, on page B5 in the U.S. edition of The Wall Street Journal, with the headline: Apps Help Find Lawyers, And Keep an Eye on Them.

Categories
Current Projects Ideabook

New Generation of Tech for Access to Justice

A great article from Slate on Tech being used for Legal Aid & Access to Justice, with lots of specific examples of how SMS and other basic tech can give reminders, process updates, basic advice, and more lawyering to people who can’t afford lawyers.
The concepts:
  • Automated Call Back Systems from legal services to people who have reached out
  • SMS reminders from courts to litigants about what expectations are
  • Using data for legal services to better track their work & targets
  • Virtual office kits to provide legal services on the go, or outside of legal offices
  • An app that gives checklists to lawyers to ensure they’re catching all the issues

“Don’t Forget Your Court Date”

How text messages and other technology can give legal support to the poor.


It has been three years since the Great Recession ended, but the nation’s courthouses are still swamped with eviction cases, foreclosures, and debt collection suits. If overdue bills and late rent were crimes, all low-income tenants and debtors could get a public defender for free. Because those cases are civil suits, though, the state doesn’t provide an attorney. Which means that in civil court, most people don’t have a lawyer in their corner—even though their homes and financial stability are on the line.

What many do have in their back pockets, however, is a smartphone. And soon, they might be able to find some legal help there, too.

Like everyone else, lawyers for the poor are trying to do more with less, as government grants and private funding have dried up. Increasingly, that means turning to tech, using new tools to deliver information to clients, support volunteer lawyers, and improve their own systems. They’re using text messaging, automated call-backs, Web chats, and computer-assisted mapping.

A crush of new clients is pushing the growing reliance on technology, as the old systems just can’t keep up. For years, people seeking help have called their local legal services offices, only to wait on hold for 20 minutes or more. If someone has a pay-by-the-minute cellphone, as many low-income people do, that gets expensive fast. Many callers just give up, says Elizabeth Frisch, the co-executive director of Legal Aid of Southeastern Pennsylvania. So Frisch and her team are piloting an automated call-back system, using voice over IP, to reduce hold time and save those precious minutes.

Text messages can also improve efficiency. If courts sent SMS reminders to litigants, that would help move along cases that get postponed over and over when one party doesn’t show up, says Glenn Rawdon. Rawdon runs the technology grants program at the Legal Services Corp., the federal program that funds legal aid groups. A text could also help people remember to bring documents to meetings with their overworked lawyers. “It’s very time-consuming if they come to the appointment and say, ‘Oh yeah, I forgot to bring the papers,’ ” Rawdon says. And SMS can be used to deliver basic legal information, like what to look for when signing a lease, or the laws surrounding a wage claim. Legal aid groups in Georgia, New York, Washington, Illinois, and Pennsylvania are all piloting text-based campaigns this year.
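A reminder job like the ones being piloted can be sketched as a daily sweep over upcoming hearings. The lead times, message wording, and record fields below are illustrative assumptions; a real deployment would hand the composed messages off to an SMS gateway:

```python
from datetime import date, timedelta

LEAD_DAYS = (7, 1)  # remind a week out and the day before

def reminders_due(cases, today):
    """Return (phone, message) pairs that should be sent today."""
    msgs = []
    for c in cases:
        for lead in LEAD_DAYS:
            if c["hearing"] - timedelta(days=lead) == today:
                msgs.append((c["phone"],
                             f"Reminder: your court date is {c['hearing']}. "
                             f"Bring your documents to room {c['room']}."))
    return msgs
```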

For simple questions, technology can help deliver information to clients. For more complicated problems, only a lawyer will do. Unfortunately, there aren’t enough lawyers to go around. That’s particularly true outside of cities.

For example, 70 percent of Georgia’s lawyers are in the Atlanta metro area, although just under 30 percent of the state’s population lives there, according to the State Bar of Georgia. Six counties have no lawyers at all.

“It’s really expensive to deliver legal services in a rural area. Lawyers have to travel,” says Michael Monahan of Georgia Legal Services. Some lawyers at his organization cover six or seven counties, he says, working in the field three or four days a week.

So five years ago, Georgia Legal Services created virtual office kits, with laptops, portable printers, and scanners. They also got an assist from Sprint, which provided free air cards for mobile Internet access and an “extremely low data rate” for unlimited usage.

In Ohio, which also has big rural areas and a shortage of lawyers to serve them, Web chat can help volunteers reach more clients.

The system “allows us to address an imbalance between where the attorneys are and where our clients are,” says Kevin Mulder, executive director of Legal Aid of Western Ohio.

But logistics aren’t the only hurdle for volunteers. They can be “a little uncomfortable taking cases that are outside their practice area,” says David Lund, who runs the Legal Aid Service of Northeastern Minnesota.

If you’re used to dealing with real estate contracts, for instance, a Medicaid case can be intimidating. So he’s developing a set of checklists for specific issues, optimized for tablets, that lawyers can use when they’re volunteering.

They’ll use it at the start of a case, as they’re laying out a client’s options, and at potential settlements, to make sure that they haven’t missed anything crucial. In eviction cases, for example, a landlord can get a judgment of possession. This allows the tenant to leave without paying back rent, but it’s still a judgment against him, which means it can jeopardize eligibility for future subsidized housing, like Section 8. An experienced landlord-tenant lawyer would know that. An occasional volunteer would not. Which is where the checklist comes in.

Some things are best left to full-time legal aid lawyers. But since there are so few, groups are using data analysis and mapping to better focus their scarce resources. Prairie State Legal Services in Rockford, Ill., is using its “incredible mass of data” to develop a mapping project, plotting addresses and legal needs. Director Michael O’Connor says this will help them answer questions like, “Are there clusters in certain communities where lots of people are facing issues with access to public benefits, or substandard housing?” Armed with that information, his staff can do targeted outreach campaigns or ramp up for litigation.
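The hotspot analysis O’Connor describes can be approximated by aggregating case records by location and issue and surfacing the dense clusters. The record shape and cutoff below are illustrative assumptions, not Prairie State’s actual data model:

```python
from collections import Counter

def hotspots(cases, min_count=3):
    """Count cases per (zip, issue) pair and return the clusters that
    appear at least min_count times -- candidates for targeted outreach."""
    counts = Counter((c["zip"], c["issue"]) for c in cases)
    return {k: n for k, n in counts.items() if n >= min_count}
```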

No one thinks technology is a cure-all. Even the best app or website can’t stand next to you in front of a judge, responding to the opposing counsel.

And despite these promising tools, unmet need is enormous. Many clients want more support than they can get from an app or a chat, but limited funds make that unlikely. “For a large percentage of those folks, [help via technology] will be it. That will be the most that we will be able to offer,” says Deb Jennings, who manages a phone helpline at Advocates for Basic Legal Equality in Toledo, Ohio. And the use of new tech tools is in the early stages—many projects are somewhere between concept and beta.

The tools that are in use show great promise. Groups across the country have developed self-help websites, and they’ve been hugely popular. In 2012 so far, more than 3 million people downloaded resources from LawHelp.org, a nonprofit site that offers legal information and legal aid referrals. Through an affiliated site, people can answer simple questions and produce documents ready to file in court. More than 300,000 people have created documents this year, for things like wills, leases, and custody agreements.

In an ideal world, everyone who needs one would have a lawyer. But few people know better than lawyers for the poor just how far from ideal this world is.

Relying on technology “is a bit like waving the white flag and saying we acknowledge we’re not going to help everybody, so here’s a second best solution,” O’Connor says. “And it is second best, but it is at least providing help to some people who otherwise wouldn’t get anything.”

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.

Categories
Ideabook Triage and Diagnosis

What about a WebMD for law?

For the excellent Legal Tech class I’m taking at Stanford Law School on the future of legal technology, I am proposing to build a WebMD for law.

My central question is ‘how might we build tech that could help a lay person diagnose their own legal problems?’ I am asking it because most legal technology currently is being built for a few audience segments:

1) Big Law lawyers who want to cut costs and make their practice more efficient

2) Law students to do research and construct arguments better

3) Fairly well-educated consumers who want to accomplish discrete tasks — making a will, incorporating a business, getting a marriage or divorce agreement, electronically signing a contract

I am interested in getting legal services & counsel to people not in these three categories: those people who lack the legal grounding to know what their legal problems are, and how they can go about fixing them.

The target audience for ‘a WebMD for law’ would be people with a legal itch — they have a problem in their life that is worrying them, and they think it might be tied up with something to do with the law.  Their dentist botched a root canal. Their landlord is asking for more money. Their employment interviewers are asking about immigration status. A policeman confiscated their camera at a protest.

There are many services currently online for people to look up statutes, cases, commentaries, and other sources of law. See Legal Information Institute, Google Scholar, PlainSite, Ravel Law, Wikipedia — and if you have money in your pocket, WestLaw and Lexis. But these legal tech products are not useful to a person unless she first knows what she is looking for.

There is a gaping need for a technology that can bridge the lay person from ‘I have a problem’ to ‘What is the law to help me with my problem’.  This technology would provide the lay person with the understanding: ‘This is my problem in legal terms. These are the specific legal matters that are at issue’. It would do what first year law students spend all their free time doing: issue-spot.

I am interested in this for a wide variety of ‘lay people’. It would be best to support those who are most removed from the legal system, people who don’t have the money, time, proximity, or knowledge to access legal counsel & services. But it would also have enormous benefit to the many people who are well-educated, living relatively well, but when it comes to the law — feel totally out of their depth, don’t know a tort from a criminal action, and can’t navigate the jargon of the legal system.

These slides are from a presentation, “Is There a WebMD Effect: open access to law, the public, and the legal profession” up on SlideShare by a lawyer, T. Bruce, who was also thinking about the possibility of a WebMD for law.  The presentation highlights that there is a need to serve this ‘latent legal market’ — while also warning that giving greater access to legal diagnosis tools may induce ‘cyberchondria’ in the general public.

People could find more legal issues in their lives than they actually have (or than actually matter), and this could have negative effects — overtaxing the legal system with more frivolous lawsuits, inducing people to seek costly legal help when in fact they do not need it, giving false confidence to people that they can represent themselves with their new legal knowledge.

This 15th century concern — that ‘reading the law without right understanding’ might lead to people harming themselves with legal knowledge — cannot be ignored. But it does not outweigh the need for a technology that can bridge people’s legal itchiness with a capacity to use the many legal resources now available online.