How do we know what the most pressing needs of the community are? How do we determine what the most promising designs and engagement strategies are?
How do we evaluate ideas’ impact early and often, to see if our proposed solutions work? And how can we best embed solutions into a community, so people engage with them?
This Evaluation Methods section of our website collects resources, experiments, and case studies that you can use to improve innovation in the justice system.
- Please explore early-stage evaluation of ideas with resources on our User Testing page.
- Look to our Indicators page to consider the outcomes and metrics you could be measuring, to determine if your intervention is improving access to justice.
Please explore below as well to see some high-level outlines of what you might consider doing in your design and innovation work for access to justice.
We are collecting methods for the whole journey of an innovation process. This collection is meant for practitioners in courts and legal aid groups, for designers and technologists working on legal and social system innovation, and for academics studying this area.
The methods are grouped into use cases, depending on what types of knowledge and outcomes they produce. We link to toolkits that demonstrate these methods in greater detail and give examples. We also link to model stories that describe specific implementations of the methods in particular contexts.
Early Stage Evaluation
Methods for Early Stage Evaluation
What are the methods we can use to understand if our new ideas, rough prototypes, or proposals are feasible, viable, and desirable for stakeholders and the system?
Please see a fuller write-up on our User Testing methods page.
Here is a quick overview of tools — with more explored in depth below.
- Priority Sorts
- Over-the-shoulder observation
- Usage and interviews
- Dot votes at an idea fair
- Field Tests: counting usage
- Usability evaluation – Likert scale
- Dignity/procedural justice evaluation
- Comprehension quiz
- Affinity/aesthetics tests
When we make medium- or high-fidelity mockups of new documents, products, tech, services, or policy, we invite stakeholders in to review these prototypes' details and give us critical feedback on them. Are they worth pursuing? How should they be edited? What must be improved before we invest more resources in them?
We have participants draw on things, rank them, and do user experience and usability assessments of them.
Often we give participants fictional money to distribute among different ideas.
Idea Review sheet
The Idea Review is an initial survey-like tool to evaluate a concept that has been proposed for an Access Innovation.
We use it at the Legal Design Lab in order to judge ideas as they have emerged out of brainstorms, and after they’ve been sketched out in rough prototypes.
We present this heuristic tool both to subject matter experts and target users, for them to give feedback that we can easily process.
It’s a quick tool to get a mix of quantitative and qualitative feedback — and it’s best used when more than one idea is being reviewed.
Burden Cost calculations
Many federal agencies are required to do a Burden Cost evaluation whenever they change procedures involved in users' interactions with social security, taxes, disability processes, or food stamp eligibility. It's required at the federal level by the Paperwork Reduction Act, which requires an agency to measure the effect of a procedural change by looking at:
- The time it takes for an average person to fill in the given form or do the required task
- The time it takes to prepare the documents or get the information to correctly fill in the form/do the task
- Assume that a person's time costs $15/hour
- Estimate the number of people who will have to go through this process
This basic calculation will allow you to produce a numeric estimate of how much this form or step costs 'the public':
(Time to fill + Time to prep) × $15/hour × Number of people doing this = Burden Cost
Having Burden Costs, or comparing them across different proposed forms, can be a very influential way to pressure policy-makers or support an argument for process simplification. In the access to justice space, you could do Burden Cost calculations that include:
- Time to search for and find the correct form
- Time to look up and understand the words being used in the form
- (Optional: time to get help at Self Help Center, including waiting in line and being seen)
- (Optional: time to call legal aid, get screened, see if they can help you, be helped)
- Time to read the form and fill in the questions
- Time to prep, make copies, and get ready for filing
- Time to file it in the courthouse
- Time to deal with any problems with filing
You can calculate these time costs by doing these steps yourselves, or by having research participants do some or all of them. You could also gather informed estimates of these timings from experts.
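The Burden Cost formula above can be sketched in a few lines of code. This is a hypothetical illustration: the step names, the minute estimates, and the population count are all invented placeholders, and the $15/hour rate follows the assumption stated above.

```python
HOURLY_RATE = 15.0  # assumed value of a person's time, in dollars/hour


def burden_cost(minutes_per_step, people_affected, hourly_rate=HOURLY_RATE):
    """Burden Cost = (sum of step times) x hourly rate x number of people."""
    total_hours = sum(minutes_per_step.values()) / 60.0
    return total_hours * hourly_rate * people_affected


# Illustrative time components for one court form (minutes, rough estimates)
steps = {
    "search for and find the correct form": 30,
    "look up unfamiliar words in the form": 20,
    "read the form and fill in the questions": 45,
    "prep, copies, and getting ready to file": 15,
    "file it in the courthouse": 60,
}

print(f"${burden_cost(steps, people_affected=10_000):,.2f}")  # roughly $425,000
```

Comparing this total against the same calculation for a simplified version of the form gives you the kind of side-by-side number that can support a simplification argument.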
Benchmarking against others
Another method to evaluate an early prototype is a Capability Improvement test. This technique can help you determine what effect your intervention (like a new form, website, or app) might have on a person's ability to navigate their legal problem and its solution.
It can help you determine if your intervention makes people more likely to engage with the legal tasks, more informed about the correct information, and more strategic in making choices that are in their best interest.
Most early-stage Capability Improvement tests focus on measuring usability, user experience, and knowledge. This means having a small number of people, representative of the target population, use your new intervention (and possibly some other versions). Your testing team will gather qualitative information on engagement and capability:
- What they like
- What they find confusing
- What they skip or ignore
- What they complain about
- What they say improves their sense of dignity, their knowledge, or their likelihood to use the thing or recommend it to friends
In addition, you gather quantitative information on changes to the testers' legal capabilities:
- Do they fully engage with all of the tasks?
- Do they complete the process?
- Do they pay attention to all that is being communicated to them (measured by eye-tracking or page-recording)?
- After they use the intervention, do they answer key Knowledge Questions correctly (in a quiz)?
- After they use the intervention, do they have a concrete, expert-approved strategy for next steps?
- How long do they have to spend to understand the information, and answer questions/form strategies correctly?
Often Capability Testing is done by giving participants scenarios and having them try to answer 'quiz' questions that test their knowledge and their strategy-making. For example, Catrina Denvir used a Legal Capability evaluation in a study of legal education online. She gave recruited participants a fictional scenario, with a 'persona' to play. Then she asked them legal knowledge and strategy questions, to determine if a new website intervention improved their ability to answer the questions correctly. The quiz questions can help more directly measure the impact of an intervention on a person's legal knowledge, and thus their capability to deal with their justice problem.
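The quiz-based scoring described above can be sketched as a before-and-after comparison. This is a minimal illustration, not Denvir's instrument: the question names, the answer key, and the participant responses are all invented placeholders.

```python
# Hypothetical answer key for a legal-capability quiz
ANSWER_KEY = {
    "q1_deadline_to_respond": "14 days",
    "q2_where_to_file": "county court",
    "q3_best_next_step": "request a fee waiver",
}


def capability_score(responses, key=ANSWER_KEY):
    """Fraction of knowledge/strategy questions answered correctly."""
    correct = sum(1 for q, answer in key.items() if responses.get(q) == answer)
    return correct / len(key)


# One participant's (invented) answers before and after using the intervention
before = {"q1_deadline_to_respond": "30 days", "q2_where_to_file": "county court"}
after = {
    "q1_deadline_to_respond": "14 days",
    "q2_where_to_file": "county court",
    "q3_best_next_step": "request a fee waiver",
}

improvement = capability_score(after) - capability_score(before)
print(f"Improvement: {improvement:.0%}")
```

Averaging this improvement across participants, and comparing it against a group that used an older version of the resource, gives a simple quantitative read on whether the intervention improves legal capability.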
Evaluations of the Intervention in the Field
Randomized Controlled Trials
A Randomized Controlled Trial (RCT for short) is an empirically rigorous way to determine if a thing you are doing (let's call it an 'intervention') is having the effect you intended it to have. It involves careful planning of exactly what you are testing, with identification of the 'variables' and the 'conditions', and multiple testing groups receiving different variations of these variables.
The goal is to have one group that has used the intervention whose efficacy you're testing, and a similar group that has not used it: the 'control' group. You can then collect data that will more clearly demonstrate whether the group with the intervention has noticeably different results than the control group.
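The random assignment step at the heart of an RCT can be sketched as follows. This assumes a simple two-arm design (intervention vs. control) with anonymized participant IDs; a real trial would also involve pre-registration, power calculations, and ethics review.

```python
import random


def assign_arms(participants, seed=42):
    """Randomly split participants into two equal-sized arms.

    The fixed seed is only for reproducibility of this sketch; a real
    trial would document its randomization procedure separately.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}


# Hypothetical pool of 100 anonymized participant IDs
arms = assign_arms([f"P{i:03d}" for i in range(100)])
print(len(arms["intervention"]), len(arms["control"]))  # 50 50
```

Because assignment is random, differences you later measure between the two arms (completion rates, case outcomes, quiz scores) can be attributed to the intervention rather than to who chose to use it.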
Read more about RCTs and examples of how to run them here, at BetterEvaluation.
Exit Ratings
A quick way to get user feedback on an experience, service, or product is to ask users for a very simple rating on their 'exit'. It can be on a text message line, on a tablet (for an in-person service), in a browser window (for a web-based service), or on a paper sheet (again, for in-person services). Ideally, it will be a very quick and visual interface that lets the user quickly rate what they've just experienced.