What Legal Help Actually Requires: Building a Task Taxonomy for AI, Research, and Access to Justice
In December 2025, I presented a new piece of research at the JURIX Conference in Turin, Italy, as part of the workshop on AI, Dispute Resolution, and Access to Justice. The workshop brought together legal scholars, technologists, and practitioners from around the world to examine how artificial intelligence is already shaping legal systems—and how it should shape them in the future.
My paper focuses on a deceptively simple question: What do legal help teams and consumers actually do when trying to resolve legal problems?
This question sits at the heart of access to justice. Around the world, billions of people face legal problems without sufficient help. Courts, legal aid organizations, and community groups work tirelessly to close this gap—but the work itself is often invisible, fragmented, and poorly documented. At the same time, AI tools are rapidly being developed for legal use, often without a clear understanding of the real tasks they are meant to support.

The work I presented in Turin proposes a way forward: a Legal Help Task Taxonomy—a structured, shared framework that defines the core tasks involved in legal help delivery, across jurisdictions, problem types, and service models. (See a first version on the JusticeBench site or in our Airtable version.)

This blog post explains why that taxonomy matters, how it was developed, and what we discussed at JURIX about making it usable and impactful—not just theoretically elegant.
Why a Task Taxonomy for Legal Help?
Legal help work is often described in broad strokes: “legal advice,” “representation,” “self-help,” or “court assistance.” But these labels obscure what actually happens on the ground.
In reality, legal help consists of dozens of discrete tasks:
- identifying what legal issue is present in a messy life situation,
- explaining a confusing notice or summons,
- calculating deadlines,
- selecting the correct form,
- helping someone tell their story clearly,
- preparing evidence,
- filing documents,
- following up to ensure nothing is missed.
Some of these tasks are done by lawyers, others by navigators, librarians, court staff, or volunteers. Many are done partly by consumers themselves. Some are repetitive and high-volume; others are complex and high-risk.
Despite this, there has never been a shared, cross-jurisdictional vocabulary for describing these tasks. This absence makes it harder to:
- study what legal help systems actually do,
- design technology that fits real workflows,
- evaluate AI tools responsibly,
- or collaborate across organizations and states.
Without task-level clarity, we end up talking past each other—using the same words to mean very different things.
How the Task Taxonomy Emerged
The Legal Help Task Taxonomy did not start as a top-down academic exercise. It emerged organically over several years of applied work with:
- legal aid organizations,
- court self-help centers,
- statewide legal help websites,
- pro bono clinics,
- and national access-to-justice networks.
As teams tried to build AI tools, improve workflows, and evaluate outcomes, the same problem kept arising: we couldn’t clearly articulate what task a tool was actually performing.
Was a chatbot answering questions—or triaging users?
Was a form tool drafting documents—or just collecting data?
Was an AI system explaining a notice—or giving legal advice?
To address this, we began mapping tasks explicitly, using practitioner workshops, brainstorming sessions, and analysis of real workflows. Over time, patterns emerged across jurisdictions and issue areas.
The result is a taxonomy organized into seven categories of tasks, spanning the full justice journey:
- Getting Brief Help (e.g., legal Q&A, document explanation, issue-spotting)
- Providing Brief Help (e.g., guide writing, content review, translation)
- Service Onboarding (e.g., intake, eligibility verification, conflicts checks)
- Work Product (e.g., form filling, narrative drafting, evidence preparation)
- Case Management (e.g., scheduling, reminders, filing screening)
- Administration & Strategy (e.g., data extraction, grant reporting)
- Tech Tooling (e.g., form creation, interview design, user testing)
Each task is defined in plain language, with clear boundaries. The taxonomy is intentionally general—not tied to one legal issue or country—so that teams can collaborate on shared solutions.
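To make those boundaries concrete, here is a minimal sketch of how a single task entry could be represented as structured data. The field names and the example identifier are illustrative assumptions on my part, not the actual schema used on JusticeBench or in the Airtable version.

```python
# Illustrative only: the field names and the task identifier below are
# assumptions, not the actual JusticeBench/Airtable schema.
from dataclasses import dataclass, field


@dataclass
class LegalHelpTask:
    task_id: str        # stable identifier, e.g. "work-product.form-filling" (hypothetical)
    category: str       # one of the seven top-level categories
    name: str           # short, practitioner-facing label
    definition: str     # plain-language definition with clear boundaries
    typical_actors: list[str] = field(default_factory=list)  # who commonly performs the task


form_filling = LegalHelpTask(
    task_id="work-product.form-filling",
    category="Work Product",
    name="Form filling",
    definition="Completing a court or agency form using facts gathered from the person seeking help.",
    typical_actors=["self-represented litigant", "navigator", "legal aid staff"],
)
```

Representing entries this way is only one option among many; the point is simply that each task carries an identifier, a category, and a boundary that tools, datasets, and evaluations can reference consistently.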


What We Discussed at JURIX
Presenting this work at JURIX was particularly meaningful because the audience sits at the intersection of law, AI, and knowledge representation. The discussions went beyond whether a taxonomy is useful (there was broad agreement that it is) and focused instead on how to make it actionable.
Three themes stood out.
1. Tasks as the Right Unit for AI Evaluation
One of the most productive conversations was about evaluation. Rather than asking whether an AI system is “good at legal help,” the taxonomy allows us to ask more precise questions:
- Can this system accurately explain documents?
- Can it safely calculate deadlines?
- Can it help draft narratives without hallucinating facts?
This task-based framing makes it possible to benchmark AI systems honestly—recognizing that some tasks (like rewriting text) may be feasible with general-purpose models, while others (like eligibility determination or deadline calculation) require grounded, jurisdiction-specific data.
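As a concrete illustration, here is a minimal sketch of what task-level evaluation could look like in code. The task identifiers, test cases, and the `run_system` interface are hypothetical placeholders rather than an official benchmark; the point is that results are reported per task instead of as one overall "legal help" score.

```python
from collections import defaultdict

# Placeholder test cases: each names a task from the taxonomy and pairs an
# input with a deliberately simple expected answer.
test_cases = [
    {"task_id": "case-management.deadline-calculation",
     "input": {"served_on": "2025-03-03", "rule_days": 21},
     "expected": "2025-03-24"},
    {"task_id": "getting-brief-help.document-explanation",
     "input": "NOTICE: You must respond within 21 days of service.",
     "expected": "21 days"},
]

def evaluate(run_system, cases):
    """Score a system separately for each task, not for 'legal help' overall."""
    per_task = defaultdict(lambda: {"passed": 0, "total": 0})
    for case in cases:
        output = run_system(case["task_id"], case["input"])
        stats = per_task[case["task_id"]]
        stats["total"] += 1
        # Simple containment check; real graders would be task-specific.
        if case["expected"] in str(output):
            stats["passed"] += 1
    return dict(per_task)

# A trivial stand-in "system" so the harness runs end to end.
def dummy_system(task_id, task_input):
    return "You have 21 days; the deadline is 2025-03-24."

print(evaluate(dummy_system, test_cases))
```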
2. Usability Matters More Than Completeness
Another theme was usability. A taxonomy that is theoretically comprehensive but practically overwhelming will not be adopted.
At the workshop, we discussed:
- staging tasks for review in manageable sections,
- writing definitions in practitioner language,
- allowing feedback and iteration,
- and supporting partial adoption (teams don’t need to use every task at once).
The goal is not to impose a rigid structure, but to create a living, testable framework that practitioners recognize as reflecting their real work.
3. Interoperability and Shared Infrastructure
Finally, we discussed how a task taxonomy can serve as connective tissue between other standards—such as legal issue taxonomies, document schemas, and service directories.
By aligning tasks with standards like LIST, Akoma Ntoso, and SALI, the taxonomy can support interoperability across tools and datasets. This is especially important for AI development: shared task definitions make it easier to reuse data, compare results, and avoid duplicating effort.
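As a rough illustration, a crosswalk from task identifiers to other standards might look like the sketch below. The code values are placeholders, not actual LIST or SALI identifiers; a real mapping would draw on the published codes from those standards.

```python
# Hypothetical crosswalk: task IDs and the placeholder code strings are
# assumptions for illustration, not published LIST or SALI values.
crosswalk = {
    "work-product.form-filling": {
        "list_issue_codes": ["<LIST code for the relevant legal issue>"],
        "sali_tags": ["<SALI identifier for the service type>"],
        "document_schema": "Akoma Ntoso",  # for the documents the task produces
    },
}

def standards_for_task(task_id: str) -> dict:
    """Look up which external standards a given task connects to."""
    return crosswalk.get(task_id, {})

print(standards_for_task("work-product.form-filling"))
```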
What Comes Next
The taxonomy presented at JURIX is not the final word. It is a proposal—one that is now moving toward publication and broader validation.
Next steps include:
- structured review by legal help professionals,
- refinement based on feedback,
- use in AI evaluation benchmarks,
- and integration into JusticeBench as a shared research resource.
Ultimately, the aim is simple but ambitious: to make legal help work visible, describable, and improvable.
If we want AI to genuinely advance access to justice—rather than add confusion or risk—we need to start by naming the work it is meant to support. This task taxonomy is one step toward that clarity.