
Why Legal Help AI R+D Gets Stuck

And why we need a shared infrastructure strategy to overcome these choke points…

By Margaret Hagan, first published on Legal Design and Innovation

So many people in the legal help field are excited about AI’s potential. Legal aid organizations, courts, law libraries, and technologists are launching chatbots, intake tools, document assistants, and triage systems.

But from my conversations & experiences over the past few months, a striking pattern keeps emerging: projects that should take weeks take months. Prototypes that demo well in controlled settings fall apart when real users arrive. Tools that work for one organization cannot be transplanted to another without rebuilding from scratch.

How can we get from promising ideas to high-quality, well-executed, safe pilots?

Why? The answer is not primarily about model capabilities or funding (though both matter). The answer is a set of structural choke points: recurring technical, institutional, and knowledge barriers that slow down R&D teams, because each team runs into them alone and has to solve them on its own.

This piece presents an initial array of 10 common choke points I have heard about from colleagues, experienced in our own Legal Design Lab work, and observed among student projects. As I lay them out, I also explain why they derail legal AI R&D. And then I point to what we could be doing (as a legal help community) to overcome them together.

In particular, I go back to what I’ve been talking about for the past several years — a shared infrastructure strategy to take on & solve challenges together, so they don’t block every team forever. We’ve been crystallizing that strategy into the Legal Help Commons — a coordinated effort to build the shared resources, reference architectures, toolkits, benchmarks, and community infrastructure that the field needs so that every team isn’t rebuilding the same foundations from scratch. (More on the Commons at this write-up from a few days ago…)

The big takeaway of this essay, though, is:

Legal AI R&D is slow not because the foundation models are weak at our teams’ tasks, but because the surrounding infrastructure — data, legal logic, compliance, and classification — often isn’t prepped to work with the models, and we’re not clear about how to reach the level of performance needed to go live. Our domain is also highly regulated and high-stakes for users, with big risks for providers and consumers. We don’t have much common knowledge or tooling for mitigating these risks, getting to great and consistent performance, and making the models perform the way we want them to. Shared infrastructure could turn months of rework, frustration, and misdirections into a more straightforward sprint.

These are recurring R&D blocks — but we can get past them if we work together.

Why Does It Take So Long to Get to High-Performing, Pilot-Ready Tools?

Before diving into individual choke points, it’s worth understanding the structural reasons that legal AI R&D moves slowly. Four forces compound to make every project harder than it looks from the outside.

Attorney Logic Is Unwritten

The most valuable knowledge in legal help — how an experienced attorney actually thinks through a case, what questions they ask, what red flags they watch for, what judgment calls they make — is overwhelmingly tacit. It lives in practitioners’ heads, not in documents. You cannot scrape it from a website or extract it from a statute. Even if the ‘law’ is written down, it is often not clear enough, accurate enough, or detailed enough to tell us what a person’s options are, what they need to do, and what their best strategy should be. The only reliable way to surface it is through extensive, structured conversations and iterative testing with subject-matter experts. This is slow, expensive, and doesn’t scale the way most technologists expect.

Source Data Is Messy and Unpredictable

Where does this knowledge live? It exists, but it’s not in good or accessible shape. Legal help content — the guides, forms, rules, eligibility criteria, and service directories that AI tools need to draw on — exists in dozens of formats across hundreds of organizations.

  • PDFs with no machine-readable structure.
  • Drupal sites with inconsistent tagging.
  • Spreadsheets maintained by a single person who left the organization.
  • Intake forms with fields that mean different things in different counties.

Every AI project begins with a data-wrangling phase that is far longer and more painful than anyone budgets for.

Unexpected Edge Cases Don’t Get Documented

Related to both the unwritten logic and the messy data: there are real risks in getting unusual or less frequent cases handled correctly. Legal problems are inherently complex and come in many, strange forms. A person facing eviction may also have a disability accommodation claim, a domestic violence protection order, unpaid utility liens, and immigration status concerns — all intersecting. These edge cases are where AI tools most often fail, and they’re also the cases where failure carries the highest stakes. But edge cases are, by definition, poorly documented. They surface only through real usage, and the field lacks systematic ways to capture, share, and learn from them.

Detailed Tasks Are Harder Than They Look

Even tasks that sound straightforward — pulling data from a case record, analyzing income documents, drafting a form response — turn out to involve many sub-decisions that require legal judgment. Is this income source countable? Does this record indicate an active case or a closed one? Which form version applies in this courthouse? Each sub-decision is a potential failure point, and each requires domain expertise to resolve correctly. It often takes teams many, many different attempts to get an LLM to do these tasks correctly and consistently.


The Ten Choke Points We Should Be Addressing

Based on talking to teams working on legal help AI projects across many jurisdictions, I have pulled out 10 recurring choke points that slow or block development.

  • Confidentiality, PII Masking, and Privilege
  • Conflicts Checking
  • Income Verification and Eligibility Determination
  • Legal Logic and Expert Reasoning (The Non-Documentable)
  • Records Pulls and System Integration
  • Tech Compliance, Data Sovereignty, and Business Agreements
  • Issue and Problem Classification
  • Edge Cases and the Documentation Gap
  • The Practitioner Empowerment Gap
  • The Perfection Trap: Unrealistic Accuracy Expectations

Each one is a problem that many teams have encountered independently — and that we could dramatically reduce through shared infrastructure.

1. Confidentiality, PII Masking, and Privilege

Legal AI tools handle some of the most sensitive personal information imaginable: immigration status, domestic violence histories, income details, criminal records, health conditions. Every team building an AI tool must figure out how to handle PII — how to mask it during development, how to protect it in production, and how to ensure that attorney-client privilege is not inadvertently waived when data flows through third-party models or services.

All the private info that people share (even when you ask them not to), or that lawyers need to gather to process a case, becomes a huge hindrance to developing an effective solution.

Why PII and privacy obligations stall R&D:

  • No shared PII detection and masking libraries tuned for legal help contexts (names in court filings, SSNs in income forms, addresses in safety-sensitive cases) — so teams rebuild this or improvise on their own
  • Lack of clear guidance on when privilege attaches in AI-assisted interactions and what data flows can safely involve cloud-based models
  • Each team builds ad hoc masking scripts, often missing edge cases (e.g., names embedded in narrative text, addresses in exhibits)
  • Testing with realistic data is nearly impossible without robust de-identification, forcing teams to test with artificial scenarios that don’t reveal real failure modes

What would help teams protect data and reduce PII exposure:

  • A shared, open-source PII detection and masking toolkit purpose-built for legal help documents — court filings, intake forms, case notes — combining rule-based pipelines with trained NER models for legal-specific entity types
  • Cutting-edge privacy by design mechanisms should be the default, not an afterthought. This means architectures where sensitive data never leaves the secure perimeter in the first place: differential privacy for aggregate analytics, federated learning so models can improve without centralizing client data, synthetic data generation for realistic testing without real PII, and confidential computing or trusted execution environments for the most sensitive processing steps. On-device or on-premises processing for safety-critical intake flows can eliminate entire categories of cloud-based privacy risk
  • Model privilege and confidentiality guidance documents, developed with ethics experts and state bar associations, covering common AI deployment architectures — including when privilege attaches in AI-assisted interactions and what data flows are safe
  • PII data flow diagrams included in every reference architecture, so teams start with a compliant design rather than bolting on privacy later
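To make the toolkit idea concrete, here is a toy sketch of the rule-based half of such a masking pass. The patterns and placeholder labels are my own illustrations; a real library would pair rules like these with trained NER models to catch names and narrative-text PII that regexes miss.

```python
import re

# Illustrative rule-based PII masking pass. A real toolkit would combine
# these patterns with NER models for legal-specific entity types (names
# in filings, addresses in exhibits, etc.).
PII_PATTERNS = [
    # Social Security numbers like 123-45-6789
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    # US phone numbers like 555-123-4567
    (re.compile(r"\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    # Email addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
]

def mask_pii(text: str) -> str:
    """Replace matched PII spans with category placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

masked = mask_pii("Reach Jane at jane@example.org or 555-123-4567. SSN 123-45-6789.")
# Note: "Jane" survives — names need NER, which is exactly why a shared,
# legal-tuned toolkit beats ad hoc regex scripts.
```

This is the kind of foundation every team currently rebuilds alone, and where missed edge cases (names embedded in narrative text) quietly leak.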

2. Conflicts Checking

Legal aid organizations (and other firms) have strict ethical obligations to check for conflicts of interest before providing services. When an AI tool screens a potential client, gathers information, or provides guidance, it may create a conflict that prevents the organization from later representing that person’s adversary — or vice versa. This seemingly simple compliance requirement has deep implications for system architecture, data sharing, and multi-organization collaboration.

Why conflicts checking stalls R&D:

  • AI intake tools often gather enough information to trigger conflict obligations before the organization realizes it
  • No shared technical patterns for conflict-aware intake architecture (when to check, what data to hold vs. discard, how to handle warm handoffs between organizations)
  • Multi-organization triage systems (e.g., statewide referral tools) face compound conflict risks that no single organization can resolve alone
  • Lack of conflict-checking APIs or integration patterns with case management systems (LegalServer, Legal Files, etc.)

What would help teams do conflict checking efficiently:

  • Reference architecture patterns for conflict-aware intake, including decision trees for when AI-gathered information triggers a conflict check
  • Direct API integration with case management systems — LegalServer, Legal Files, and others — so that AI tools can query the organization’s existing conflict database in real time before collecting detailed case information, rather than building parallel conflict-detection logic
  • Model data retention and handoff protocols for multi-organization triage platforms, specifying what information can be shared during warm handoffs and what must be discarded
  • A living resource documenting emerging case law and ethics opinions on AI-assisted intake and conflict obligations
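As a rough illustration of the “check before you collect” pattern, here is a minimal sketch. The stand-in client list, function names, and return shapes are all hypothetical; in practice the check would query the organization’s case management system via an API.

```python
# Illustrative conflict-aware intake gate: run the conflict check on
# minimal information BEFORE gathering detailed case facts.
EXISTING_CLIENTS = {"acme property management"}  # stand-in conflict database

def check_conflicts(adverse_party: str) -> bool:
    """Return True if the adverse party matches an existing client."""
    return adverse_party.strip().lower() in EXISTING_CLIENTS

def begin_intake(applicant: str, adverse_party: str) -> dict:
    """Gate detailed intake behind the conflict check."""
    if check_conflicts(adverse_party):
        # Stop here: collect nothing further, discard what was gathered.
        return {"status": "declined", "reason": "potential conflict", "details": None}
    return {"status": "proceed", "applicant": applicant, "details": "collect full facts"}
```

The design point is the ordering: the tool holds only the minimum data needed to run the check, so it never gathers enough to trigger conflict obligations before the organization knows one exists.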

3. Income Verification and Eligibility Determination

Most legal aid services are means-tested: to qualify, a person must demonstrate that their income falls below a certain threshold (often tied to Federal Poverty Guidelines).

Courts need to do means-testing as well. When they give low-income people the option to waive fees to file eviction answers or debt collection answers, they require a form explaining that person’s income and verifying its accuracy. The same applies when they consider whether to reduce a person’s traffic ticket fines based on ‘ability to pay’, or when they consider waiving past court costs based on income or disability.

Income verification sounds like a simple data check, but in practice it is one of the most time-consuming, discouraging, and error-prone steps in legal help delivery. Income comes from many sources, is reported differently in different documents, and is calculated differently across programs. For a person, proving they qualify for a given service or policy can feel like doing their taxes: lots of complicated fields and proof required, and lots of thinking and research to fill it in correctly.

Why this income/status verification stalls R&D:

  • Every organization builds its own income/eligibility calculation logic, often with subtle bugs (e.g., annualizing hourly wages, handling irregular income, counting household vs. individual income)
  • Eligibility thresholds vary by program, court, funder, and jurisdiction — there is no single lookup table. Same thing for court’s fee waivers or ability to pay determinations
  • Document verification (pay stubs, tax returns, benefits letters) requires extraction from varied formats with no shared tooling
  • Calculating a given household’s or individual’s income level requires many different fields and questions
  • AI tools that skip or simplify income checks risk enrolling ineligible clients (compliance failure) or turning away eligible ones (access failure)

What would help overcome this verification choke point:

  • Direct data connections to authoritative eligibility databases rather than rebuilding verification logic from scratch. In many states, systems of record already exist — CalSAWS/CalFresh in California, state SNAP and TANF databases, SSA benefit verification services, Medicaid enrollment systems. The technical challenge is about getting clean, permissioned query access to the systems that already hold the answer to eligibility (not having the person calculate their own eligibility). Mapping these authoritative data sources, building adapter patterns for querying them, and advocating for API access where it doesn’t yet exist would eliminate the single most error-prone step in legal aid intake
  • A shared eligibility rules engine for cases where direct database access isn’t available, encoding income thresholds, household size calculations, and program-specific variations as configurable, reusable logic
  • Document extraction templates for common income verification documents (pay stubs, W-2s, SSA letters, benefits statements) that AI tools can reuse across projects
  • Test suites with edge cases: irregular income, self-employment, mixed households, benefits cliff scenarios
  • Partnership with courts, LSC and IOLTA programs to maintain a canonical, machine-readable eligibility threshold database
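A shared rules engine could start as small as this sketch: wage annualization plus a threshold lookup. The guideline figures and percentages below are illustrative placeholders, not current Federal Poverty Guidelines; a real engine would load maintained, jurisdiction-specific tables.

```python
# Illustrative eligibility rules engine. Figures are placeholders,
# NOT actual Federal Poverty Guidelines.
FPG_BASE = 15_000       # assumed annual guideline for a 1-person household
FPG_PER_PERSON = 5_400  # assumed increment per additional household member

def annual_income(hourly_wage: float, hours_per_week: float) -> float:
    """Annualize an hourly wage (a common source of subtle bugs)."""
    return hourly_wage * hours_per_week * 52

def is_eligible(income: float, household_size: int, pct_of_fpg: float = 1.25) -> bool:
    """Check income against a program threshold stated as a % of the guideline."""
    threshold = (FPG_BASE + FPG_PER_PERSON * (household_size - 1)) * pct_of_fpg
    return income <= threshold

# e.g. $15/hr at 30 hrs/week annualizes to $23,400 for a household of 3
income = annual_income(15, 30)
eligible = is_eligible(income, household_size=3, pct_of_fpg=1.25)
```

Encoding the program-specific variations (125% vs. 200% of guidelines, household counting rules) as configuration rather than re-implemented logic is what would make this reusable across organizations.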

4. Legal Logic and Expert Reasoning (The Non-Documentable)

This is the deepest and most consequential choke point, and the one that takes up the most time for R&D teams. The knowledge that makes a legal aid attorney effective — the ability to synthesize facts, weigh risks, anticipate complications, judge credibility, and make strategic choices — is overwhelmingly tacit. It is not written down in any guide, form, or statute. It lives in the practiced judgment of experienced practitioners, and it varies by jurisdiction, judge, courthouse culture, and case type.

So much knowledge is not written down anywhere!

Why this unwritten lawyer knowledge stalls R&D:

  • AI tools built only on written legal content miss the most important layer: the strategic and practical reasoning that experienced attorneys apply
  • Extracting this knowledge requires structured interviews, scenario walkthroughs, and iterative testing with SMEs — all expensive and slow
  • Even when captured, expert reasoning is often conditional and probabilistic (“usually the judge will…”, “in my experience…”) which is hard to encode faithfully
  • Without this knowledge layer, AI tools give technically correct but practically useless advice — like following the recipe but never having tasted the food

What would help overcome this knowledge gap:

  • Structured knowledge elicitation protocols (interview guides, scenario-based walkthroughs, think-aloud methods) published as reusable toolkits so every project doesn’t reinvent the process
  • A practitioner knowledge contribution framework — structured ways for attorneys to annotate, correct, and enrich AI tool outputs that feed back into shared knowledge bases
  • Journey-aware content models (like the Basics / Process / Complications framework) as standard chunking approaches that preserve the practical reasoning layer rather than flattening expert knowledge into FAQ pairs
  • Funded “expert sprint” sessions where practitioners from multiple jurisdictions walk through scenarios together, generating reusable decision logic and edge case libraries
  • Evaluation rubrics that specifically test for practical reasoning quality, not just legal accuracy

5. Records Pulls and System Integration

This one is about getting the authoritative data out of key databases, to make accurate determinations, craft strategies, and fill in forms/draft documents. It’s also about getting new filings and documentation into the authoritative systems.

Many legal AI tasks require information that lives in external systems: court records, property records, benefits databases, criminal history repositories, vital records. Pulling this information programmatically is essential for automation — and almost always harder than expected.

Why this database access stalls R&D:

  • Most court and government record systems lack modern APIs; data access requires screen-scraping, SFTP drops, or manual lookup
  • Record formats vary dramatically across jurisdictions (docket entries, case indexes, property records), with no common schema
  • Access permissions, authentication requirements, and acceptable use policies differ by system and often require formal agreements that take months to execute
  • Real-time access is rarely possible, meaning AI tools must work with stale data and handle the resulting uncertainty

What would help to build these points of database access:

  • A field-wide map of record systems and access pathways for high-priority use cases (eviction records, court calendars, benefits verification) across pilot states, published as navigable guides so that every new project doesn’t have to rediscover who to call and what format to expect
  • Partnership with CourtStack and court technology vendors to advocate for standardized record APIs, starting with case status and calendar endpoints — the same pattern that transformed healthcare interoperability (FHIR) and could do the same for court systems
  • Adapter libraries for common record system patterns (e.g., Odyssey, Tyler Supervise, ICMS) that projects can reuse rather than building from scratch
  • Fallback architecture patterns for when API access is unavailable — including cached snapshots, polling strategies, and graceful degradation designs that handle stale data transparently
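The fallback pattern in that last bullet can be sketched in a few lines: try the live system, fall back to a cached snapshot, and tell the caller how stale the data is rather than hiding it. The names and structure here are illustrative, not any particular record system’s API.

```python
import time

# Illustrative graceful-degradation pattern for record pulls.
CACHE: dict[str, tuple[float, dict]] = {}  # case_id -> (fetched_at, record)

def fetch_case(case_id: str, live_fetch, max_age_seconds: float = 86_400) -> dict:
    """Return a case record with an explicit 'stale' flag."""
    try:
        record = live_fetch(case_id)
        CACHE[case_id] = (time.time(), record)
        return {"record": record, "stale": False}
    except Exception:
        if case_id in CACHE:
            fetched_at, record = CACHE[case_id]
            age = time.time() - fetched_at
            return {"record": record, "stale": age > max_age_seconds}
        raise  # no live data and no snapshot: fail loudly, don't guess
```

Surfacing staleness explicitly lets downstream tools warn users (“this case status is from yesterday”) instead of presenting old data as current — the transparency the bullet above calls for.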

6. Tech Compliance, Data Sovereignty, and Business Agreements

Another big one is demonstrating privacy compliance, which can be harder than it looks. Along with privacy rules, a solution must also satisfy other regulations that require ethical compliance.

Before a legal AI tool can go live, the deploying organization must navigate a thicket of compliance requirements: data processing agreements with cloud providers, BAAs for tools that touch health information, state-specific data residency rules, ADA accessibility requirements, terms of service for upstream AI model providers, and often funder-specific technology policies (e.g., like a philanthropy’s regulations on technology use with grant funds).

Why this privacy and other compliance stalls R&D:

  • Each organization negotiates these agreements independently, with limited legal and technical capacity to evaluate complex terms
  • Model provider terms of service change frequently and may include training-on-input clauses that conflict with confidentiality obligations
  • State-level data sovereignty requirements (especially for courts) are poorly documented and inconsistently applied
  • Accessibility compliance (WCAG 2.1 AA) is often treated as an afterthought rather than a design requirement, leading to costly retrofits
  • The result: months of legal review before any code ships

What would help groups navigate this compliance:

  • Template agreements (DPAs, BAAs, model provider addenda) pre-negotiated for common legal aid deployment patterns, so organizations can adopt rather than draft from scratch
  • A regulatory landscape tracker covering state-by-state AI and data requirements for legal services, maintained as a living resource
  • An accessibility compliance checklist and testing protocol specific to legal help AI interfaces
  • Pre-approved technology standards developed with LSC, IOLTA programs, and major funders that organizations can reference in compliance documentation
  • A procurement and compliance working group that pools expertise and shares reviewed vendor assessments, so that one organization’s diligence benefits everyone

7. Issue and Problem Classification

Legal problems rarely arrive pre-labeled. But many teams’ solutions rely on correctly classifying a person’s scenario so it can be slotted and connected to the right help. This is the classification problem.

A person contacts a legal aid hotline and says, “My landlord is trying to kick me out and I haven’t been able to work because of my injury.” That single sentence may implicate housing (eviction defense), employment (wrongful termination or disability accommodation), public benefits (workers’ compensation, SSI/SSDI), and possibly immigration or family law depending on context. Correctly classifying the legal issues is foundational to triage, routing, and service delivery — and it is surprisingly hard to automate.

Why this classification problem stalls R&D:

  • Each organization keeps relying on its own internal taxonomy — similar to others but just different enough that each has to be maintained and deployed one org at a time. They are not cross-walked to common taxonomies.
  • Existing taxonomies (like LIST) provide excellent coverage for some areas, but require human judgment to apply to real-world narratives that rarely map cleanly to a single code. There are no common, easily available tools for doing LIST or other classification.
  • Multi-issue cases are the norm, not the exception — but most AI systems are built to classify to a single primary issue
  • Classification accuracy drops sharply for uncommon issue types, non-English speakers, and cases involving intersecting legal domains
  • Without reliable classification, every downstream step — routing, content retrieval, eligibility determination — is compromised

What would help overcome the classification trouble:

  • A shared problem/taxonomy classifier that anyone can use to correctly spot and label the issues present in a person’s problem scenario, documents, narratives, and more.
  • More groups using the same LIST taxonomy and contributing to it, rather than creating their own.
  • Crosswalks between LIST and other taxonomies in active use (LSC problem codes, state-specific codes, NSLA categories) so that classification outputs are interoperable across organizations and systems. Even organizations that want to keep their own taxonomy could crosswalk it over.
  • Gold-standard classification test sets using LIST codes, with realistic multi-issue scenarios, non-English examples, and edge cases — maintained as a shared benchmark
  • Multi-label classification approaches (not single-label) as the default pattern in reference architectures
  • Iterative refinement of classification models incorporating real intake data (de-identified) to improve accuracy over time
  • Published classification accuracy benchmarks by issue area and language, so organizations can make informed decisions about where AI classification is reliable enough to deploy
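To show what multi-label-by-default means in practice, here is a toy sketch. The issue codes and keywords are invented for illustration — a production classifier would use trained models and real LIST codes — but the key design choice is visible: the function returns a set of labels, never forcing a single primary issue.

```python
# Toy multi-label issue spotter. Codes and keywords are illustrative,
# not actual LIST taxonomy entries.
ISSUE_KEYWORDS = {
    "HOUSING-EVICTION": ["evict", "kick me out", "notice to vacate"],
    "EMPLOYMENT": ["fired", "wrongful termination", "work"],
    "BENEFITS": ["ssdi", "workers' compensation", "food stamps"],
}

def classify_issues(narrative: str) -> set[str]:
    """Return every issue code whose keywords appear in the narrative."""
    text = narrative.lower()
    return {
        code
        for code, keywords in ISSUE_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }

labels = classify_issues(
    "My landlord is trying to kick me out and I haven't been able to work."
)
```

On the hotline example from above, this returns both a housing code and an employment code — exactly the multi-issue reality that single-label systems flatten away.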

8. Edge Cases and the Documentation Gap

Edge cases are where legal AI tools are most likely to fail and where failure matters most. A tenant who is both a victim of domestic violence and an undocumented immigrant facing eviction needs different guidance than a straightforward non-payment case. A person with a cognitive disability navigating a court self-help center needs a different interaction pattern than someone who is tech-savvy and literate. These edge cases are poorly documented, poorly tested, and poorly handled by current tools.

Why this lack of edge case documentation stalls R&D:

  • Edge cases are discovered in production, when the tool is already far down the development journey or even in pilot — by the time they surface, real people may have been harmed
  • There is no systematic mechanism for organizations to share the edge cases they discover with other teams building similar tools
  • Test suites focus on common scenarios because edge case data is scarce; this creates a false sense of confidence in model performance
  • The highest-stakes edge cases (safety, immigration consequences, mandatory reporting triggers) are also the most sensitive to document and share

What Edge Case documentation would help:

  • A shared, de-identified edge case library organized by issue area, with structured fields for the scenario, the failure mode, the correct handling, and the lessons learned
  • Edge case discovery protocols built into every reference architecture — structured ways for pilot teams to identify, document, and escalate unexpected scenarios
  • Safety-specific test suites for high-risk situations like mandatory reporting triggers, imminent harm indicators, and immigration red flags that every tool should pass before deployment
  • A cross-organization learning loop where teams regularly share anonymized edge case reports and collectively develop response patterns
  • Edge case coverage as a required dimension in all evaluation rubrics — a tool cannot score well without demonstrating handling of non-standard scenarios
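The structured fields described above might look something like this sketch of a shared library record (the field names and example values are illustrative):

```python
from dataclasses import dataclass, field

# Illustrative schema for a shared, de-identified edge case library entry.
@dataclass
class EdgeCaseReport:
    issue_area: str          # e.g. "housing"
    scenario: str            # de-identified description of the situation
    failure_mode: str        # what the tool did wrong
    correct_handling: str    # what should have happened
    lessons_learned: str     # guidance for other teams
    tags: list[str] = field(default_factory=list)

report = EdgeCaseReport(
    issue_area="housing",
    scenario="Tenant facing eviction also disclosed a pending disability claim",
    failure_mode="Tool routed to the standard non-payment flow only",
    correct_handling="Flag intersecting claims and escalate to human review",
    lessons_learned="Screen for intersecting issues before routing",
    tags=["multi-issue", "escalation"],
)
```

Agreeing on a common record shape is what turns one team’s painful discovery into another team’s test case.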

9. The Practitioner Empowerment Gap

Another big choke point around legal help AI R&D is that we don’t have enough people working on it. If a lawyer, paralegal, or other team member has a great idea for transforming their workflow, they often feel like they have to wait for a tech/design team to help them build it. Where is the innovation team to help them? The legal experts’ ideas stall out while they wait.

The innovation teams are so oversubscribed, and there is so much to do, that the ideas lose momentum and don’t move into serious development or testing.

The people who understand legal help delivery best — attorneys, paralegals, court clerks, legal aid managers, navigators — are overwhelmingly not the ones building the AI tools. They are waiting for technologists to build something, then reacting to it. This dynamic is backwards.

Legal experts hold the domain knowledge that determines whether a tool actually works, but they rarely feel empowered to drive the transformation themselves. They don’t see their daily work reflected in system-level workflow diagrams. They aren’t given tools or frameworks that let them participate in design, specification, or evaluation at a meaningful level. Instead, they are positioned as reviewers of someone else’s interpretation of their work — and by the time they see a prototype, the foundational assumptions are already locked in.

Why this expert practitioner gap stalls R&D:

  • Legal experts are treated as consultants to technology projects rather than co-designers or owners of the transformation
  • Workflow mapping and systems thinking are rarely part of legal training, so practitioners don’t see their expertise as relevant to technology design — even though it is the most critical input
  • Technology teams build from their own assumptions about how legal help works, encoding misunderstandings that practitioners could have caught in the first hour
  • The field loses the practitioners who are best positioned to drive change, because they don’t see a path from their current role to meaningful participation in AI development
  • Organizations wait for external vendors or grant-funded tech teams to “bring AI to them” rather than building internal capacity to shape and lead their own AI strategy

What would help more legal help teams get empowered with AI R&D:

  • Reference architectures and playbooks designed so that legal experts — not just technologists — can use them to specify, scope, and evaluate AI tools for their own contexts
  • Practitioner-facing workflow mapping tools and templates that let legal staff document their own processes in a format that directly feeds AI design (bridging the gap between “how I do my job” and “what the system should do”)
  • Legal practitioners embedded as co-leads and co-designers in build sprints, not as advisors brought in after architecture decisions are made. This could also be through taking experts off their regular teams for 6 weeks or 6 months to see a project through.
  • Training programs built around empowering legal professionals to lead AI projects, not just understand them
  • Low-code and no-code specification frameworks (e.g., structured scenario templates, decision tree builders) that let practitioners author the logic without writing code
  • A narrative shift from “technologists building for legal aid” to “legal experts building with technology support” — and funding structures that match

10. The Perfection Trap: Unrealistic Accuracy Expectations

This last one comes further down the R&D road, where an idea has already been built out. It might even have gone through many, many rounds of refinement. But then the choke point emerges: leaders say it cannot be put into pilot until it is (nearly) perfect.

(Even though this is not the standard for human legal teams… )

Projects stall or get taken offline because stakeholders demand sky-high accuracy across every dimension before greenlighting a pilot. A triage tool must classify every issue correctly. A document assistant must never produce a flawed draft. An intake chatbot must handle every possible scenario without error.

Standards set too high can mean nothing ever goes to pilot.

This all-or-nothing standard sounds responsible, but it actually prevents responsible deployment by conflating genuinely high-risk functions with lower-risk ones that could safely launch with good-enough performance and human oversight.

It ignores what the current baseline is. Often this baseline is no services at all — sending people to DIY or use free, non-specialized AI tools or Reddit or informal advice to solve their problem. These are likely to have much, much lower performance scores. Or the baseline is human legal teams, which likely will have higher quality performance scores — but not 99% accuracy and safety.

Why this perfection trap stalls R&D:

  • Decision-makers apply a single accuracy bar across all functions, rather than differentiating by actual risk level — a wrong answer about courthouse hours is not the same as a wrong answer about a filing deadline
  • The comparison baseline is implicit perfection, not the current reality — where people routinely get no help at all, or get wrong information from overwhelmed staff, or miss deadlines because they couldn’t reach anyone
  • Fear of liability and bad press creates institutional paralysis: no one wants to be the organization that deployed an AI tool that gave wrong legal advice, even if the alternative is that thousands of people get no advice
  • Evaluation frameworks rarely distinguish between “high-risk, must-be-right” functions (safety screening, mandatory reporting triggers, deadline calculations) and “lower-risk, value-even-if-imperfect” functions (general orientation, resource finding, form field explanations)
  • Pilot proposals die in committee because reviewers focus on the 5% failure cases rather than the 95% improvement over the status quo of no help

What would help teams responsibly deal with this perfection gap:

  • A risk-tiered evaluation framework that categorizes AI functions by actual consequence of error — distinguishing safety-critical functions (where near-perfect accuracy is genuinely required) from informational and navigational functions (where good-enough performance with human backup is a massive improvement over no service)
  • “Compared to what?” baselines for common legal help scenarios: what is the current accuracy, completeness, and timeliness of help that people actually receive today? Make the real comparison explicit so that decision-makers evaluate AI tools against the actual alternative, not against an imagined perfect system
  • Graduated deployment models: start with low-risk functions (information, orientation, resource finding), demonstrate safety and value, then expand to higher-risk functions as confidence and evidence accumulate
  • Model governance frameworks that show how to combine AI assistance with human review at calibrated levels — heavy oversight for high-risk functions, lighter touch for lower-risk ones — so that organizations can deploy responsibly without requiring perfection
  • Risk-tier mapping as a standard step in every implementation playbook, so that teams and their stakeholders explicitly agree on which functions require what level of accuracy before development begins
  • Advocacy with funders and regulators for evidence-based accuracy standards rather than zero-tolerance policies, using data from pilots to demonstrate that imperfect AI plus human oversight outperforms no AI at all
  • Insurance products that can make consumers whole again if there is a problem — but that still allow a team to move forward if their solution meets the agreed-upon standard.
  • Evaluation teams in well-resourced, public interest organizations who can maintain realistic standards and help teams carry out reliable, right-sized performance evaluations, so teams have accurate knowledge of how their tools are performing.
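
The risk-tiering ideas above can be sketched in code. Here is a minimal, hypothetical example; the tier names, accuracy thresholds, and oversight levels are illustrative placeholders that a team and its stakeholders would agree on before development, not values from any real framework:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers, ordered by consequence of error."""
    INFORMATIONAL = 1    # e.g., courthouse hours, resource finding
    NAVIGATIONAL = 2     # e.g., orientation, form field explanations
    SAFETY_CRITICAL = 3  # e.g., deadline calculations, safety screening

@dataclass
class TierPolicy:
    min_accuracy: float  # accuracy bar required before piloting this function
    human_review: str    # level of human oversight once deployed

# Hypothetical policy table: different bars for different functions,
# rather than one zero-tolerance bar for everything.
POLICIES = {
    RiskTier.INFORMATIONAL: TierPolicy(min_accuracy=0.85, human_review="spot-check"),
    RiskTier.NAVIGATIONAL: TierPolicy(min_accuracy=0.92, human_review="sampled review"),
    RiskTier.SAFETY_CRITICAL: TierPolicy(min_accuracy=0.99, human_review="review every output"),
}

def ready_to_pilot(tier: RiskTier, measured_accuracy: float) -> bool:
    """Compare a function's measured accuracy to its own tier's bar."""
    return measured_accuracy >= POLICIES[tier].min_accuracy

# A wrong answer about courthouse hours is judged against a different
# bar than a wrong deadline calculation:
ready_to_pilot(RiskTier.INFORMATIONAL, 0.90)    # True
ready_to_pilot(RiskTier.SAFETY_CRITICAL, 0.90)  # False
```

Making the policy table explicit is the point: it forces the "which functions require what level of accuracy" conversation to happen before development, not in a pilot review committee.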

The Shared Infrastructure Strategy to Overcome These 10 Choke Points

Each of these choke points, encountered individually, is enough to stall or kill a project. Encountered together — which is what happens to every team — they explain why legal AI R&D feels so much harder than it should be.

The Legal Help Commons is designed specifically to address this pattern. Rather than letting every team independently solve the same problems, the Commons creates shared resources that any team can draw on — organized across three pillars:

JusticeBench: Project inventory, benchmarks, test suites, classification datasets, edge case libraries, regulatory landscape tracker

Implementation Library: Reference architectures, PII and privacy toolkits, eligibility engines, template agreements, adapter libraries

Cohorts & Community: Power Groups, expert sprints, edge case sharing loops, working groups, cross-org learning

The key insight is that these choke points are not unique to any one project. They are field-level problems that require field-level solutions. No single organization should have to negotiate its own data privacy agreements from scratch, build its own PII masking pipeline, map its own path to court record APIs, or discover its own edge cases through trial and error. Shared infrastructure makes these solved problems rather than ongoing obstacles.

What Changes If We Deal With these R&D Blocks

If the field successfully builds shared infrastructure around these ten choke points, the impact compounds:

  • Project timelines compress from months to weeks as teams start from working reference architectures instead of blank pages
  • Quality improves as edge case libraries and evaluation rubrics capture hard-won lessons across organizations
  • Costs drop as privacy toolkits, eligibility engines, and template agreements eliminate duplicated work
  • Equity improves as smaller organizations gain access to the same infrastructure as well-resourced ones
  • Trust grows as the field demonstrates shared standards for safety, accuracy, and accountability
  • Practitioners lead as workflow mapping tools and low-code frameworks give legal experts direct authorship over AI system design

What We Need From the Field

Shared infrastructure requires shared participation. To build these resources, we need several things, especially from those who have been on R&D journeys or have supported them.

  • Practitioner time: Attorneys and legal aid staff willing to participate in expert sprints, annotate AI outputs, and share edge cases
  • Pilot partners: Organizations willing to test reference architectures in real settings and report back on what works and what doesn’t
  • Technical contributors: Developers, data scientists, and researchers who can build and maintain open-source tooling
  • Funder alignment: Grants and contracts that support shared infrastructure, not just point-to-point tool development
  • Honest reporting: Teams willing to share failures and difficulties, not just successes — because the failures are where the learning lives

Legal help AI R&D gets stuck not because the technology is inadequate, but because every team hits the same structural barriers alone. The Legal Help Commons is building the shared infrastructure to clear those barriers once — so that the field’s energy goes into innovation, not rework.


3 Kinds of Access to Justice Conflicts

(And the Different Ways to Design for Them)

by Margaret Hagan

In the access to justice world, we often talk about “the justice gap” as if it’s one massive, monolithic challenge. But if we want to truly serve the public, we need to be more precise. People encounter different kinds of legal problems, with different stakes, emotional dynamics, and system barriers. And those differences matter.

At the Legal Design Lab, we find it helpful to divide the access to justice landscape into three distinct types of problems. Each has its own logic — and each requires different approaches to research, design, technology, and intervention.

3 Types of Conflicts that we talk about when we talk about Access to Justice

1. David vs. Goliath Conflicts

This is the classic imbalance. An individual — low on time, legal knowledge, money, or support — faces off against a repeat player: a bank, a corporate landlord, a debt collector, or a government agency.

These Goliaths have teams of lawyers, streamlined filing systems, institutional knowledge, predictive data, and now increasingly, AI-powered legal automation and strategies. They can file thousands of cases a month — many of which go uncontested because people don’t understand the process, can’t afford help, or assume there’s no point trying.

This is the world of:

  • Eviction lawsuits from corporate landlords
  • Mass debt collection actions
  • Robo-filed claims, often incorrect but rarely challenged

The problem isn’t just unfairness — it’s non-participation. Most “Davids” default. They don’t get their day in court. And as AI makes robo-filing even faster and cheaper, we can expect the imbalance in knowledge, strategy, and participation to grow worse.

What Goliath vs. David Conflicts need

Designing for this space means understanding the imbalance and structuring tools to restore procedural fairness. That might mean:

  • Tools that help people respond before defaulting. These could be pre-filing defense tools that detect illegal filings or notice issues. It could also be tools that prepare people to negotiate from a stronger position — or empower them to respond before defaulting.
  • Systems that detect and challenge low-quality filings. These could also flag repeat abusive behavior from institutional actors.
  • Interfaces that simplify legal documents into plain language. Simplified, visual tools to help people understand their rights and the process quickly.
  • Research into procedural justice and scalable human-AI support models

2. Person vs. Person Conflicts

This second type of case is different. Here, both parties are individuals, and neither has a lawyer.

In this world, both sides are unrepresented and lack institutional or procedural knowledge. There’s real conflict — often with emotional, financial, or relational stakes — but neither party knows how to navigate the system.

Think about emotionally charged, high-stakes cases of everyday life:

  • Family law disputes (custody, divorce, child support)
  • Mom-and-pop landlord-tenant disagreements
  • Small business vs. customer conflicts
  • Neighbor disputes and small claims lawsuits

Both people are often confused. They don’t know which forms to use, how to prepare for court, how to present evidence, or what will persuade a judge. They’re frustrated, emotional, and worried about losing something precious — time with their child, their home, their reputation. The conflict is real and felt deeply, but both sides are likely confused about the legal process.

Often, these conflicts escalate unnecessarily — not because the people are bad, but because the system offers them no support in finding resolution. And with the rise of generative AI, we must be cautious: if each person gets an AI assistant that just encourages them to “win” and “fight harder,” we could see a wave of escalation, polarization, and breakdowns in courtrooms and relationships.

We have to design for a future legal system that might, with AI usage increasing, become more adversarial, less just, and harder to resolve.

What Person Vs. Person Justice Conflicts Need

In person vs. person conflicts, the goal should be to get to mutual resolutions that avoid protracted ‘high’ conflict. The designs needed are about understanding and navigation, but also about de-escalation, emotional intelligence, and procedural scaffolding.

  • Tools that promote resolution and de-escalation, not just empowerment. They can ideally support shared understanding and finding a solution that can work for both parties.
  • Shared interfaces that help both parties prepare for court fairly. Technology can help parties prepare for court, but also explore off-ramps like mediation.
  • Mediation-oriented AI prompts and conflict-resolution scaffolding. New tools could have narrative builders that let people explain their story or make requests without hostility. AI prompts and assistants could calibrate to reduce conflict, not intensify it.
  • Design research that prioritizes relational harm and trauma awareness.

This is not just a legal problem. It’s a human problem — about communication, trust, and fairness. Interventions here also need to think about parties that are not directly involved in the conflict (like the children in a family law dispute between separating spouses).

3. Person vs. Bureaucracy

Finally, we have a third kind of justice issue — one that’s not so adversarial. Here, a person is simply trying to navigate a complex system to claim a right or access a service.

These kinds of conflicts might be:

  • Applying for public benefits, or appealing a denial
  • Dealing with a traffic ticket
  • Restoring a suspended driver’s license
  • Paying off fines or clearing a record
  • Filing taxes or appealing a tax decision
  • Correcting an error on a government file
  • Getting work authorization or housing assistance

There’s no opposing party. Just forms, deadlines, portals, and rules that seem designed to trip you up. People fall through the cracks because they don’t know what to do, can’t track all the requirements, or don’t have the documents ready. It’s not a courtroom battle. It’s a maze.

Here many of the people caught in these systems do have rights and options. They just don’t know it. Or they can’t get through all the procedural hoops to claim them. It’s a quiet form of injustice — made worse by fragmented service systems and hard-to-reach agencies.

What Person vs. Bureaucracy Conflicts Need

For people vs. bureaucracy conflicts, the key word is navigation. People need supportive, clarifying tools that coach and guide them through the process — and that might also make the process simpler to begin with.

  • Seamless navigation tools that walk people through every step. These could be digital co-pilots that walk people through complex government workflows, and keep them knowledgeable and encouraged at each step.
  • Clear eligibility screeners and document checklists. These could be intake simplification tools that flag whether the person is in the right place, and set expectations about which forms someone needs and when.
  • Text-based reminders and deadline alerts, to keep people on top of complicated and lengthy processes. These procedural coaches can keep people from ending up in endless continuances or falling off the process altogether. Personal timelines and checklists can track each step and provide nudges.
  • Privacy-respecting data sharing so users don’t have to “start over” every time. This could mean administrative systems with document collection & data verification features that gather and store the proofs (income, ID, residence) people are asked to supply over and over. It could also mean carrying their choices and details between trusted systems, so they don’t need to fill in yet another form.

This space is ripe for good technology. But it also needs regulatory design and institutional tech improvements, so that systems become easier to plug into — and easier to fix. Aside from user-facing designs, we also need to work on standardizing forms, moving from form-dependencies to structured data, and improving the tech operations of these systems.
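
The shift from form-dependencies to structured data can be made concrete with a small sketch. Everything here is hypothetical (the field names, the processes, and the idea of a shared "person record" are invented for illustration): a person's verified facts are captured once as data, and each process declares which facts it needs instead of shipping its own form that re-asks the same questions.

```python
# Hypothetical sketch: structured data instead of form-dependencies.
# A person's facts are captured once...
person = {
    "full_name": "Jane Doe",
    "monthly_income": 1800,
    "household_size": 3,
    "residence_state": "CA",
}

# ...and each bureaucratic process declares which facts it needs,
# rather than embedding them in its own PDF with its own wording.
PROCESS_REQUIREMENTS = {
    "fee_waiver": ["full_name", "monthly_income", "household_size"],
    "benefits_application": ["full_name", "monthly_income",
                             "household_size", "residence_state"],
}

def missing_fields(process: str, record: dict) -> list:
    """A data-driven checklist: tell the user exactly what is still
    needed for a given process, instead of handing them a blank form."""
    return [f for f in PROCESS_REQUIREMENTS[process]
            if f not in record or record[f] in (None, "")]

missing_fields("fee_waiver", person)  # [] -- everything already on file
missing_fields("benefits_application", {"full_name": "Jane Doe"})
# three fields still needed; nothing already verified is re-asked
```

The design choice this illustrates: once requirements are data, the same stored facts can drive eligibility screeners, document checklists, and "don't start over" data sharing across trusted systems.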

Why These Distinctions Matter

These three types of justice problems are different in form, in emotional tone, and in what people need to succeed. That means we need to study them differently, run stakeholder sessions differently, evaluate them with slightly different metrics, and employ different design patterns and principles.

Each of these problem types requires a different kind of solution and ideal outcome.

  • In David vs. Goliath, we need defense, protection, and fairness. We need to help reduce the massive imbalance in knowledge, capacity, and relationships, and ensure everyone can have their fair day in court.
  • In Person vs. Person, we need resolution, dignity, and de-escalation. We need to help people focus on mutually agreeable, sustainable resolutions to their problems with each other.
  • In Person vs. Bureaucracy, we need clarity, speed, and guided action. We must aim for seamless, navigable, efficient systems.

Each type of problem requires different work by researchers, designers, and policymakers. These include different kinds of:

  • User research methods, and ways to bring stakeholders together for collaborative design sessions
  • Product and service designs, and the patterns of tools, interfaces, and messages that will engage and serve users in this conflict.
  • Evaluation criteria, about what success looks like
  • AI safety guidelines, about how to prevent bias, capture, inaccuracies, and other possible harms. We can expect these three kinds of conflicts to change as AI usage grows among litigants, lawyers, and court systems.

If we blur these lines, we risk building one-size-fits-none tools.

How might the coming wave of AI in the legal system affect these 3 different kinds of Access to Justice problems?

Toward Smarter Justice Innovation

At the Legal Design Lab, we believe this three-type framework can help researchers, funders, courts, and technologists build smarter interventions — and avoid repeating old mistakes.

We can still learn across boundaries. For example:

  • How conflict resolution tools from family law might help in small business disputes
  • How navigational tools in benefits access could simplify court prep
  • How due process protections in eviction can inform other administrative hearings

But we also need to be honest: not every justice problem is built the same. And not every innovation should look the same.

By naming and studying these three zones of access to justice problems, we can better target our interventions, avoid unintended harm, and build systems that actually serve the people who need them most.


Interviewing Legal Experts on the Quality of AI Answers

This month, our team commenced interviews with landlord-tenant subject matter experts, including court help staff, legal aid attorneys, and hotline operators. These experts are comparing and rating various AI responses to commonly asked landlord-tenant questions that individuals may get when they go online to find help.

Learned Hands Battle Mode

Our team has developed a new ‘Battle Mode’ of our rating/classification platform Learned Hands. In a Battle Mode game on Learned Hands, experts compare two distinct AI answers to the same user’s query and determine which one is superior. We also ask the experts to speak aloud as they play, articulating their reasoning. This gives us insight into why a particular response is deemed good or bad, helpful or harmful.

Our group will be publishing a report that evaluates the performance of various AI models in answering everyday landlord-tenant questions. Our goal is to establish a standardized approach for auditing and benchmarking AI’s evolving ability to address people’s legal inquiries, one applicable to major AI platforms as well as to local chatbots and tools built by individual groups and startups.
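
Pairwise "which answer is better" judgments like these can be aggregated into a model leaderboard with a standard rating method. Below is a minimal sketch using an Elo-style update; the method, model names, and battle results are illustrative assumptions, not the actual Learned Hands scoring procedure:

```python
# Elo-style ratings from pairwise expert judgments.
# Illustrative only -- not the actual Learned Hands scoring method.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_ratings(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Shift both ratings toward the observed outcome."""
    ra, rb = ratings[winner], ratings[loser]
    ea = expected_score(ra, rb)
    ratings[winner] = ra + k * (1 - ea)
    ratings[loser] = rb - k * (1 - ea)

# Hypothetical battle results: (winner, loser) per expert judgment.
battles = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]

ratings = {m: 1000.0 for m in ("model_a", "model_b", "model_c")}
for winner, loser in battles:
    update_ratings(ratings, winner, loser)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
# model_a ends up rated highest after winning both of its battles
```

The appeal of pairwise aggregation is that experts never have to assign absolute scores; a ranking emerges from many simple "A beat B" judgments, which is also why the think-aloud reasoning captured alongside each battle is so valuable.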

Instead of speculating about potential pitfalls, we aim to hear directly from on-the-ground experts about how these AI answers might help or harm a tenant who has gone onto the Internet to problem-solve. This means regular, qualitative sessions with housing attorneys and service providers, to have them closely review what AI is telling people when asked for information on a landlord-tenant problem. These experts have real-world experience in how people use (or don’t) the information they get online, from friends, or from other experts — and how it plays out for their benefit or their detriment. 

We also believe that regular review by experts can help us spot concerning trends as early as possible. AI answers might change in the coming months & years. We want to keep an eye on the evolving trends in how large tech companies’ AI platforms respond to people’s legal help problem queries, and have front-line experts flag where there might be a big harm or benefit that has policy consequences.

Stay tuned for the results of our expert-led rating games and feedback sessions!

If you are a legal expert in landlord-tenant law, please sign up to be one of our expert interviewees below.

https://airtable.com/embed/appMxYCJsZZuScuTN/pago0ZNPguYKo46X8/form


People’s experiences with eviction prevention

From a team in the Justice By Design: Eviction Class, 2022.

I: Overview of Activities 

  1. Our policy lab interviewed sixteen tenants, navigators, and landlords across the country, learning from their experiences and hearing their ideas. We asked general questions about their experiences with eviction, their experiences with seeking out help, and their ideas for change.
  2. We synthesized interviews by creating personas, user journeys, and visual representations of salient moments gleaned from the interviews. 
  3. Finally, we shared common findings to capture pervasive issues and suggest potential reforms.

II: Problems identified based on interviews with tenants

Informal evictions

Many tenants described falling behind on rent and feeling that they had to move out, even before they had been served with any formal eviction documents. Landlords often don’t follow proper notice procedures for eviction, telling their tenants to pay what they owe or start planning to move out. Considering a pervasive fear of the legal system, as discussed below, it is difficult to imagine tenants being empowered to hold their landlords accountable for breaking the law.

Especially for tenants behind on rent, many lack a feeling of agency to look for resources. They assume that because they are behind on rent, they will not have any recourse to resist displacement. 

The fact that many evictions occur informally presents unique challenges for policy implementation. Eviction reforms centered around courts are common, but legal and court reforms will not affect the experiences of those evicted extralegally. These experiences highlight the need for empowering interventions that occur before the eviction experience; tenants need to know of their rights and resources before a housing scare occurs. Any intervention that does not reach clients pre-eviction may be too late. 

Tenant Story: John 
John was informally evicted from his home in San Francisco. Due to local tenant protections, John very likely could have received legal aid—if he knew where to look. But John was evicted informally; he was told to vacate by his landlord, without being provided any proper legal notice. 
John was recovering from injuries he sustained during an accident, so he did not feel that he had the ability to look for any financial or legal resources. Unable to make up the rent he owed, John and his family had to move out. They were able to live temporarily with friends and family until they found a new place to live.
John’s story is a prime example of how even when robust legal or financial resources exist, these resources provide no recourse to informally evicted tenants who lack awareness of their options. Ensuring that tenants are informed of their rights and resources before crisis occurs is critical.

Complex eviction notices

Receiving a Notice to Quit or an eviction summons could be a potential point of intervention; these notices ideally would tell tenants: (1) why they are receiving the notice; (2) how they can respond to the notice; and (3) resources they can seek if they need assistance.

Formal eviction notices are far from this ideal. To most tenants, they appear to be warnings that they need to leave, rather than indicators that they have options as part of an ongoing process.

Notices tend to be written in confusing English, and are often not served in foreign languages. Some states have attempted to simplify eviction notices. In Massachusetts, for example, an eviction summons gives the tenant a court date. Getting to court can be difficult, but being given a date and location seems easier to comply with than the requirement of making an official legal filing. Greater Boston Legal Services has a free online service that prompts tenants with questions to answer in plain English, then creates a form that tenants can use in Housing Court to help them defend themselves. Instead of forcing people to file an official Answer, giving tenants the option to fill out an online form where they can explain their situation could be much more tenant-friendly.

We also learned that the landlord-tenant relationship is becoming increasingly bureaucratized. Many tenants live not under mom-and-pop landlords, but rather under large, impersonal property management companies. These companies can churn out Notices to Quit summarily after tenants fall behind on rent—even if they fall behind for just a few days. Tenants feel slighted by this impersonal process; they are asked to vacate without anyone checking in on them or trying to work things out informally.

Property management companies provide an interesting wrinkle in how we think about policy implementation. Because their systems are bureaucratized (and may be less personally antagonistic toward non-paying tenants), it may be simpler for them to implement positive changes—like attaching an NAACP Navigator flier whenever they serve a Notice to Quit.

Tenant Story: Linda 
Linda works as a case manager for people affected by COVID, and her work includes assisting people through eviction scares. She is thoroughly knowledgeable about the resources available to tenants in her home state of Colorado. Yet because she lives under an impersonal property management company, she received a Notice to Quit after falling behind on rent by three days. 
Having lived in her home for some time without any issues, Linda was shocked and offended that the company would try to kick her out after being behind for just three days. And even though she knows the law, she reported that her ability to comprehend her rights was compromised when she received her notice—she started to second-guess her own knowledge. 
Linda acknowledges that if she did not have her specialized background knowledge, the notice would likely have prompted her to leave.

Fear of court and court inaccessibility

Most tenants we interviewed never really pictured their eviction scare as a legal issue. For most who sought recourse, their emphasis was on finding enough money to pay. Some tenants expressed uncertainty about what, if any, legal resources were available to them. Certain tenants expressed that they did not qualify for legal aid, yet they could not independently afford legal assistance. 

Beyond the problem of access to legal advice, many tenants expressed broad skepticism about court. There is a shared understanding that court is a protracted, exhausting endeavor. Having to balance that experience with a family, a job, and other obligations is challenging, and sometimes impossible. For some, going to court does not feel worth the risk of losing time for their other commitments, potentially having the black mark of a formal eviction on their record, exposing their children to a courthouse, or going against their landlord—who they identify as having more power within the system. 

Any interventions that focus on the legal process of eviction must consider the fact that many tenants are evicted informally, and that even tenants with the opportunity to go to court choose to avoid the process of legal resistance. If interventions are designed to make court more tenant-friendly and more feasible to navigate, these changes need to be communicated to tenants to change a widespread negative perception of the legal system.

Tenant Story: Linda 
As discussed above, Linda works with people being evicted, so she is very aware of tenant resources and legal rights. When she faced her own eviction scare, however, she did not see the court as a viable option, and she instead opted for finding financial assistance. Certainly, going to court could yield a positive result, but the prospect of being formally evicted and having that on her permanent record was too risky. The fact that even someone as knowledgeable as Linda was scared of the courts is highly telling. 

Fear of “fighting,” desire for help

Related to the fear of court, tenants generally had overall apprehension at the thought of “fighting for their rights” or resisting. Due to the high stress of eviction, as well as the numerous obligations many tenants have to balance, the notion of resisting doesn’t always seem feasible or attractive. Most tenants focused not on resisting, but rather on getting some assistance and moving on with their lives.

Many eviction prevention policies place a heavy emphasis on lawyering, and encouraging tenants to resist through the various legal defenses they can raise. But to better meet tenants’ needs and desires, non-legal help (like the Navigators) may be a preferable intervention. Several tenants sought out rental assistance, but not legal assistance, suggesting that tenants may disfavor interventions that are seen as overly combative. There was also a widespread consensus that rental assistance was more accessible than legal services. Because legal interventions seem to be disfavored, policy that focuses on strengthening the legal backbone of eviction defense may fail to affect tenants who are simply seeking to move on as soon as possible and reach a place of stability. A good area for further inquiry would be asking tenants how they feel about lawyers generally as a resource. Would they be comfortable reaching out to a lawyer, or do they feel more comfortable reaching out to non-lawyer advocates?

One organization that focuses on prevention, rather than resistance, is HomeStart in Boston. HomeStart’s first line of defense in eviction prevention is a rental assistance payment program that seeks to help tenants halt the eviction process and pay back rent. HomeStart also has non-lawyer advocates who accompany clients to Housing Court, where they assist in negotiating feasible payment plans with landlords. HomeStart’s focus on holistic services and stability, rather than legal defense, may feel more accessible and comforting to tenants. 

Tenant Story: Ken 
Ken fell behind on rent and was served with an eviction notice after failing to resolve the issue informally with his landlord. Ken decided not to seek out legal aid or resist the eviction. He figured that the legal process would be too expensive. Plus, because he was behind on rent, he believed that he had no chance of asserting a legal defense. 
Ken was more comfortable reaching out to Southwest Behavioral and Health Services, where he was placed with a caseworker. Ken had a great experience seeking out holistic services. He was able to secure financial assistance to find a new home, and his caseworker also assisted him in filling out housing assistance applications. Ken now has Section 8 housing.

High stress

Several tenants communicated that they might have the ability to search for resources if the housing problems were happening to someone else, but that their ability to problem-solve was significantly clouded by their high levels of stress. Tenants have to balance family obligations, work, health, etc., and the emotional turmoil of housing insecurity means that it is often not feasible to seek out proper channels of assistance under these circumstances.

The reality of eviction is that even the most resourceful of tenants are often unable to figure out where to go to get help. Even if tenants know their rights, it may be asking too much for tenants undergoing this traumatizing process to resist. Perhaps interventions should therefore be centered around providing tenants the assistance of a third party, like a Navigator, who can take on the burden of finding resources. In other words, interventions that focus solely on empowerment and self-advocacy may fall short in these situations of heightened vulnerability.

General difficulty in securing resources

Many tenants had frustrations with the process of attempting to secure resources. One tenant, Darlene, actually sought legal aid, but the offices she contacted were unresponsive due to overwhelming demand. Darlene became frustrated, and ultimately stopped trying to seek out legal aid when the stress of her impending eviction became overwhelming. Another tenant, Linda, was frustrated by the ERAP process. Her ERAP payment would take months to process, but she had very little time to pay the rent she owed. Linda ended up having to borrow from friends and family to stay in her home. Multiple tenants expressed a desire for an easy-to-access, uniform service for rental assistance.

A desire—but no outlet—to help

One of the most unfortunate ironies of eviction is that it is such a widely shared experience in some communities, yet the experience of being evicted is completely isolating. Many tenants who have experienced an eviction scare gain practical knowledge about best practices, but that knowledge is lost if not shared with others. 

Several tenants expressed gratitude that they were able to share their eviction stories, and were hopeful that the information they relayed would help others in similar situations. A surprising number of tenants showed an interest in becoming more formally involved in eviction prevention and attending events to share their experiences. Being evicted is a disempowering experience, and we heard tenants express that talking about their experiences was helpful. People seemed to appreciate having their voices heard, even if just for a brief interview. Eviction is a community problem, not an individual problem, so interventions should seek to integrate larger communities.

Tenant Story: Jen
Jen experienced manipulation and invasions of privacy when she had unofficial housing contracts. After being in two situations in which she was taken advantage of by landlords, she now feels empowered to speak up for others in the Vietnamese community. She knows many people are facing the same issues, and she wants to use her voice to stand up for her community.

III: Experience-Centric Solutions

Key Takeaways

Based on the conversations we had with tenants across the country, we found three key takeaways from the eviction process that are integral to any user-centered, experientially-motivated solutions: 

  1. Communication is key. For each tenant we spoke to, communication seemed to fail: primarily between tenants and landlords, but also with families, employers, court employees, judges, and government officials. The tight timelines of evictions can jam already busy lines of communication, and even a day of unresponsiveness or a misunderstood court order can be the difference between a family staying in their home with their back rent paid, or living in temporary housing while struggling to find a new home. Facilitating clear communication throughout the eviction process will be key to ensuring fair, mutually beneficial outcomes.
  2. Isolation is disastrous. Almost every conversation we conducted with evicted tenants revealed the overwhelming sense of isolation that they endured throughout the eviction process. With no one to turn to, tenants were consistently forced into short-term, fight-or-flight thinking just to cope with the situation at hand. This often meant accepting unlawful evictions, or not knowing who to call to access the legal aid they were eligible for. When tenants have no support through the eviction process, they must consistently make decisions out of necessity. Supported, connected tenants, on the other hand, are much more likely to fight for their rights and reach mutually beneficial solutions.
  3. Awareness is lacking. Tenants are nearly universally lost when they receive an eviction notice or are made aware of an informal eviction process. Up to the point of eviction, they have received no education on how to manage an eviction process or their rights as a tenant. Generally, once the eviction process has begun, eviction education is almost useless—dealing with a current landlord, in addition to working to find a suitable new home, is stressful enough. Even in cities with robust tenant services and resources, like San Francisco, tenants still do not know who to reach out to when they are served with an eviction notice, and are thus not able to make use of the available services. Tenants must be informed enough to know where to turn, even if this is just knowing an urgent, non-emergency number, like 311.

Key Opportunities

Inspired by current policy solutions and pilots across the US, we used these key takeaways from tenant interviews to determine three potential opportunities for intervention in the current eviction landscape: 

Mandatory Mediation

Currently, almost all jurisdictions see eviction cases go straight to the courtroom, and tenants often choose to forgo their right to a trial out of intimidation. With mandated mediation, tenants have the opportunity to meet the landlord on a more even playing field, where mutual benefit is incentivized for both parties and there is a better chance of preserving the tenant-landlord relationship. Courts benefit, too, from reduced caseloads. This approach has worked well during the pandemic in Philadelphia, where the city’s Eviction Diversion program has required landlords to go to mediation with their tenants before they are able to evict them. Philadelphia is unique, though, and many municipal and state jurisdictions face political opposition to any measures perceived to be biased toward renters or more costly than conventional courts. The program also fails to address informal evictions. While not a cure-all, and while an eviction notice mandating mediation remains frightening for many, we believe this could be an important step toward empowering both landlords and tenants to reach an agreeable, workable solution that cuts costs and effort for all involved.

Navigator Programs

Given the discouraging prevalence of isolation during the eviction process, the potential to empower tenants to find their best solution through support and companionship is very important to experience-centric innovation in the eviction landscape. With housing navigator programs, like the NAACP pilot program in Richland County, SC, tenants at any stage of the eviction process can be connected with a community member who has been trained to understand the local eviction landscape and can educate tenants on their options and the available resources. This engages the local community on the issue of eviction, and provides both support and a know-your-rights knowledge base for tenants. Still, this comes with challenges: navigator recruitment and training, maintaining the boundary between advice and UPL, and the organizational overhead. Even when those are addressed, if tenants in need don’t know about the program, it can also be yet another helpful resource that goes unused. Nonetheless, when executed correctly, navigator programs have the potential to guide isolated and uninformed tenants to their best interest outcomes.

Renter Education and Simplified Notices

Most importantly, in our conversations with tenants, we found that eviction is nearly always an emergency. Even when renters anticipate consequences for nonpayment of rent, or have been threatened by their landlord in the past, an eviction is always a moment of stress that no one feels prepared for. The opportunity here is obvious: What if eviction were something that every renter was prepared for? Or, what if every tenant at least knew one website to visit or number to call in case of urgent eviction needs? This is the case in Milwaukee, where the Rent For Success Program has worked hard to ensure that every tenant in the city has access to basic information and education to enable successful renting, benefiting both tenants and landlords. While this solution may meet the most needs, and clearly serves to enable the earlier two, it too has challenges. How does one implement such a program? Is it mandatory for all municipal renters? Despite these questions, education is an exciting opportunity for individual municipalities to develop unique, local programs that can iterate, evolve, and grow to have tangible impacts on both landlords and tenants.

Categories
Class Blog Design Research Project updates

Observing a county court for language access

Initial Observations at the Santa Clara Family Justice Center (Week 2)
By Sahil Chopra

During our second week of the course, we paid our first visit to the Santa Clara Family Justice Center in order to observe, explore, and immerse ourselves in the court experience. Our day at court was structured around exploring the self-help facilities before branching out into smaller, more intimate portions of the courthouse in smaller groups. My team drove down to the court and arrived at around 8:30 am, just as the self-help waiting room started to fill up. We jotted down a few stray observations before convening with the rest of our class in the lobby at 9:00 am, where our instructors Margaret and Jonty handed out a few Design Review pamphlets for our day at court, wherein we continued to write down our observations and thoughts.

Here are the highlights from our first trip to court. Next week, we shall pool our individual observations and insights, as we brainstorm what potential problems and solutions might be.

Self-Help Desk

Definition:

Many users do not have access to a lawyer, so the court provides a self-help desk, where individuals wait in a queue until court staff call up their ticket number and help them address their problem — whether that be information about the filing process or guidance as to which forms must be filled out to proceed with their case. While the self-help desk provides an invaluable service, it is often understaffed. As a result, court users often line up outside the Family Court around 7:00 am, though the center does not open until 8:30 am and does not start processing tickets until about 9:00 am. When it comes to language access, there is not much the self-help desk can provide on its limited budget. Those who do not speak English must bring along a translator: a legal adult in the state of California (18 years or older), preferably a relative. If they come without a translator, they will ultimately be turned away.

Highlights:

The self-help waiting room feels like a hybrid of the DMV and a doctor’s office. Everyone sits side-by-side, but in their own little world. Entering the room, there are black chairs lining the perimeter, except for the left-hand wall, which is covered with assorted forms. While the wall seemed very well organized (color-coded, accessible, etc.), very few people approached it to pick up flyers. Perhaps the placement of all essential forms in a single spot seemed overwhelming?

Sitting in the crowd, it was easy to spot parents who had brought their teenagers to help them with their paperwork. I saw a sixteen-year-old boy, in a hushed voice, reading over an assortment of forms and quickly translating them for his mom. Translation services would help decongest the overflowing waiting room by limiting the number of family members who need to be brought along. Additionally, it would benefit both the kids and the parents if the children did not have to take time off school.

Workshop

Definition:

Throughout the week, the self-help desk hosts several workshops, in which the process for filing a specific motion is discussed and assistance is provided with form-filling. It just so happened that our visit coincided with a divorce workshop.

After spending some time in the self-help room a few of us decided to observe the workshop.

Highlights:

While we were sitting in the self-help room, one of the court staff came out and announced who had made it into the workshop and who had not. It seemed a bit impersonal and harsh to be called out by name, especially when everyone knows the issue associated with your use of the court. But maybe that helps normalize the act of getting help?

The informational portion of the workshop consists of a 50-minute, screen-capture PowerPoint presentation with narration. It was interesting that there were more spots for the video portion of the workshop than for the 1:1 assistance portion, even though the latter feels more important to the goal of filing a motion. This discrepancy between maximum capacity and serviceable capacity highlights the need for more staff.

The PowerPoint video described the technical legal terminology and processes surrounding divorce. While informative, the video didn’t seem to be helpful. Within the room, one couple talked over the video, trying to fill out their paperwork as it played. Most of the other viewers seemed to pay attention for the first five minutes before sinking into their chairs and waiting out the remainder of the video’s runtime.

The first problem with the video is that it is entirely in English. If you don’t speak English well, you’ve just wasted 50 minutes that could have been spent getting help.

The second problem with the video is that it is too long and lacks participant engagement. It’s important to be precise and informative, especially when dealing with legal matters; but the video consisted of a PowerPoint and a voiceover, with no color and few pictures. Furthermore, it did not actually help with the process of filling out the forms. Without interactivity, the video failed to provide actionable instructions — thus failing its purpose of helping individuals who needed assistance in filing for divorce.

The third problem with the video is that it is inaccessible. It cannot be accessed outside the workshop, and even within the workshop it cannot be paused, rewound, etc. Thus, it fails its purpose of being a one-stop reference for all things divorce-related. Additionally, the video was poorly constructed: many of the important facts were spoken but never transcribed on the slides, even though the slides themselves were full of text.

Possible Language Access/Self-Help Solutions

After sitting through the workshop, I think there is a lot of low-hanging fruit here, i.e. small changes that can be made to improve outcomes and scale the program — even in the face of budgetary issues.

Solution 1 (Low Overhead): There are many computers in the workshop room. Instead of making everyone watch the PowerPoint video together, provide every workshop-attendee a pair of headphones, so that they can pause and rewind the video wherever they want.

Solution 2 (Low Overhead): Split the presentation into digestible chunks. After each video section have the workshop-attendees fill out the respective portion of the form. This tight coupling is often used in flipped classrooms and should make the process more self-directed.

Solution 3 (Low Overhead): Post the video and presentation online. Let people view the contents and fill out the form digitally at home.

Solution 4 (High Overhead): Translate the presentation into several key languages, e.g. Spanish, Vietnamese, Korean, Hindi, and Mandarin. This is a one-time job but would improve accessibility tremendously.

Miscellaneous Observations

After experiencing the divorce workshop firsthand, we decided to sit in on a few of the court hearings that were open to the public. Before we headed up the stairs to the courtrooms, I stepped away to get some water. In the five minutes that I was gone, my teammates encountered a Latina woman who could not speak English well. She was asking where she could find the police, and it was only after a few exchanges that my teammates realized she was looking for “something to keep [a person] away”, i.e. a restraining order. They then showed her the route to the appropriate court office, but it was apparent that there needs to be better outreach within local cultural and ethnic communities, covering the purpose of the court, the terminology surrounding it, and the services it can provide. This might help reduce friction for those seeking support, especially non-native speakers. Perhaps outreach at libraries, churches, and grocery stores could help with this problem.

Overall, I was surprised to see how calm and collected the judges were at responding and guiding the proceedings. It seemed as if they really cared about both parties involved. The empathy demonstrated was quite moving, especially given how messy some of the court cases were.

Categories
Class Blog Design Research

A Design Prototype for a Policy Canvas


For our Design for Justice: Language Access class, our teaching team made a canvas to help a design team craft a forward plan for the projects they have been working on to advance language access in the courts through technology. The canvas is useful as a one-page hand-off that helps a policy partner understand what the team is proposing, and how it can be taken to the next stage of piloting and evaluation.

Categories
Class Blog Design Research

Eviction design class

In late April 2018, Daniel Bernal and Margaret Hagan taught the first part of the d.school pop-up Design For Justice: Eviction. The class focused on how we might better empower people who have received eviction notices (specifically, in Arizona) to know their rights, their options, and to go to court to fight eviction.

In the class, our 2 teams focused on what intervention we might send in the mail to activate someone right after they have received an eviction notice, and what intervention we might point them to for greater support and guidance.

We worked in 2 phases. First, we did a recap of key insights, personas, players, and trends regarding the eviction process, user experience, and legal help resources in Arizona. We did this with calls to Arizona legal help leaders, a service designer who has been working on eviction help, and Daniel’s presentations on his research into eviction trends and strategies in Arizona.

Our second phase of work was brainstorming and prototyping. Our 2 teams focused on the different intervention points, to create an Idea Catalogue of possible ways to empower users through a mailer or a digital resource.

From this brainstorm, we critiqued the ideas with some help from our frequent collaborator, Kursat Ozenc, who is a design strategist. We will now write up these ideas, formalize them slightly, and invite a panel of legal, sociology, behavior change, technology, and design experts to give further feedback. From there, we will begin to develop first versions of several of the concepts that we will test with the public in our second half of the class.

Categories
Class Blog Design Research

User journey through Housing Court

In our classes, we map out different users’ journeys through the court. This map, from one of the Northeastern University student teams, abstracts different users’ journeys through housing court in Boston.

Categories
Class Blog Design Research

Drawing of a Housing Court waiting room

A sketch from my notebook, while I was observing a waiting room in a Court Service center in Boston, for people who were waiting for help with housing cases.

Categories
Class Blog Design Research

Designing a more user-friendly legal system: notes from the field

Today we held our Prototyping Access to Justice class on-site at the San Mateo County courthouse, specifically in and around the Self-Help Center and the Family Law Facilitator’s office.

The six student teams are all at the point where they have working prototypes that they want to test. They each have hypotheses about how they can make the legal system better for people without lawyers, and have embodied these hypotheses into a new tool — digital- or paper-based.

Instead of our usual class setting at a design studio at Stanford’s d.school, we created an impromptu class space in the Waiting Area on the 2nd floor of the Superior Court, where people line up to be seen at the Self-Help Center or wait to be called for an appointment. Some of the teams also set up testing spaces inside the Self-Help Center, for when people had down-time after they had filled in forms or were waiting for next steps.

The teams sought out people to give quick feedback, as well as longer experiential testing. They had interactive click-through prototypes of digital tools, paper mockups of new tools, posters and floor pathways for navigation, and tablets with new feedback forms. They had gift cards to give to user testers, to compensate for their time.

They tested their prototypes in small groups — with some taking notes (or translating into Spanish) and others leading the questions. They also had designer and developer coaches with them, to help them spot new opportunities and to run the testing sessions.

Takeaways

So what were the takeaways? I was able to pull out some high-level insights during my debriefs with each team, as well as some specific points for improvement.

1. The forms are too many and too complex. This was a refrain that each team heard from users, whether or not their questions and prototype revolved around forms. If there is one big message that family law litigants have for courts, it is: make your forms easier to understand, and easier to complete.

There is an overload of paperwork, laid out in a way that does not make sense to people and overwhelms them.

2. Little things about court — like parking, wayfinding, and security checks — have a big influence on people’s experience. Though we as lawyers might think of the legal procedure, forms, and hearings as the main determinants of people’s procedural justice and sense of fairness about court, other more pedestrian factors shape their time with the legal system. If parking is difficult, expensive, or on a ticking timer, this adds an extra layer of pressure and confusion. If the security guards doing initial checks at the door are adversarial or cold, this raises people’s stress level and starts them off on the wrong foot. If there is confusion about where they are going or how to get there, people lose confidence in themselves and feel that they are wasting time and not being strategic.

3. Pathways on the Floor should be implemented immediately. Our team Chukka-Ryorui, who are focused on improving navigation, put down a dotted red line from the building’s entrance to the Self Help Center on the 2nd floor. They used masking tape to make the line — and it took less than half an hour to implement. The feedback was universally positive. People were able to follow it and understand it without any complicated explanation. Users reported that they already are familiar with this pattern from hospitals, and appreciate having it here. They want bold color lines that they can follow easily, along with complementary signage.

We recommend that courts implement colored floor pathways for their most popular routes: Self Help Center, County Clerk, and Jury Services primarily. This is a relatively cheap intervention (vinyl floor paths are not that expensive) that can have a major impact.

4. “Out-of-Court Homework” Tasks must be Modeled, with Reminders. As we heard from litigants and from staff, the most common fail points are around the tasks that the litigant cannot do at the court, but must do outside it. Getting service of process done correctly, with the address, date, and other pieces of data noted properly on the paperwork, is a very common failure, as is remembering to get it done at the right time.

One recommendation in this space is to have more reminder services that proactively reach out to litigants to remind them that they must complete this task on time.

There is also demand for model forms that have been completed correctly, with annotations explaining why they are correct and how to do it right.

5. Be Mobile First, with guides and tools for the phone. The overwhelming majority of the people we spoke with have mobile phones, and are willing to use them to get legal tasks done. Tools must be built for phones, not desktops.

We are setting a bounty for the best new product that lets people understand processes and fill out forms using the phone. Even if this is not ideal, and even if we wish that people had the big screens of a desktop computer when working through complex processes, in practice they will be using mobile phones and paper handouts most of the time.

6. Maps are key. The team that was testing a giant, slightly comics-style map of the child custody process got great reviews. People responded well to their characters and to the map-based view.

They are thinking in terms of both a paper-based map (a wall poster of the general ‘happy path’ of how the process works in the ideal, combined with a booklet of in-detail maps that include detours) and a digital map that can be zoomed in on.

People were able to instantly figure out the paper-based map; they knew how to use it. The digital map was harder — people were more hesitant to use it, unsure of what to touch on the screen and what would happen.

 —

More insights to come as our class proceeds — stay tuned for more of our design work and proposals for making Self Help Centers and the legal system more user-friendly.

I am also quite excited about setting up a more regular pop-up design lab on site at courts and other points in the legal system. To create more relevant and interactive designs, input directly from litigants and court professionals is highly valuable. And prototyping in the environment helps the designs mesh better with this particular context and the affordances and opportunities that already exist there.