What methods can we use to create better innovations in the justice system?
How do we know what the most pressing needs of the community are?
How do we determine what the most promising designs and engagement strategies are?
How do we evaluate ideas’ impact early and often, to see if our proposed solutions work?
And how can we best embed solutions into a community, so people engage with them?
This Evaluation Methods section of our website collects resources, experiments, and case studies that you can use to create better innovations in the justice system. We have begun with a methods page on User Testing and will continue to add more pages. Please also explore below to see some high-level outlines of what you might consider doing in your design and innovation work for access to justice.
We are collecting methods for the whole journey of an innovation process. The collection is meant for practitioners in courts and legal aid groups, for designers and technologists working on legal and social system innovation, and for academics who study this area.
The methods are grouped by use case, depending on what types of knowledge and outcomes they produce. We link to toolkits that demonstrate these methods in greater detail and give examples. We also link to model stories that describe specific implementations of the methods in particular contexts.
Evaluations of Early-Stage Ideas + Prototypes
Methods for Early-Stage Evaluation
What are the methods we can use to understand if our new ideas, rough prototypes, or proposals are feasible, viable, and desirable for stakeholders and the system?
Please see a fuller write-up on our User Testing methods page.
Here is a quick overview of tools — with more explored in depth below.
- Priority Sorts
- Over-the-shoulder observation
- Usage and interviews
- Dot votes at an idea fair
- Field Tests: counting usage
- Usability evaluation – Likert scale
- Dignity/procedural justice evaluation
- Comprehension quiz
- Affinity/aesthetics tests
When we make medium- or high-fidelity mockups of new documents, products, technology, services, or policy, we invite stakeholders to review these prototypes in detail and give us critical feedback. Are they worth pursuing further? How should they be revised? What must be improved before we invest more resources in them?
We have participants mark up the prototypes, rank them, and complete user experience and usability assessments of them.
Often we give participants fictional money to distribute among the different ideas, so we can see which concepts they would actually invest in.
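As a rough illustration, here is a minimal sketch in Python of how such fictional-money allocations might be tallied to compare ideas. The idea names, budgets, and amounts are hypothetical, invented for this example.

```python
from collections import defaultdict

# Each participant distributes a fixed budget of fictional money across
# the ideas under review. All names and amounts here are hypothetical.
allocations = [
    {"Text reminder service": 60, "Plain-language forms": 30, "Court kiosk": 10},
    {"Text reminder service": 40, "Plain-language forms": 50, "Court kiosk": 10},
    {"Text reminder service": 70, "Plain-language forms": 20, "Court kiosk": 10},
]

totals = defaultdict(int)
for participant in allocations:
    for idea, amount in participant.items():
        totals[idea] += amount

# Rank the ideas by how much fictional money they attracted overall.
grand_total = sum(totals.values())
for idea, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: {total} ({total / grand_total:.0%} of all money allocated)")
```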
Idea Review sheet
The Idea Review is an initial, survey-like tool to evaluate a concept that has been proposed for an Access Innovation.
We use it at the Legal Design Lab to judge ideas as they emerge from brainstorms, and after they have been sketched out as rough prototypes.
We present this heuristic tool to both subject-matter experts and target users, so they can give feedback that we can easily process.
It is a quick tool to get a mix of quantitative and qualitative feedback, and it is best used when more than one idea is being reviewed.
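Because the Idea Review mixes quantitative ratings with open comments, its responses lend themselves to simple aggregation. Here is a minimal sketch, assuming hypothetical criteria (feasibility, desirability, viability) rated 1 to 5, of how several reviewers' scores on competing ideas might be summarized. The criteria, ideas, and comments are all invented for illustration.

```python
from statistics import mean

# Hypothetical Idea Review responses: each reviewer rates an idea on
# several criteria (1 = poor, 5 = excellent) and leaves an open comment.
reviews = [
    {"idea": "Plain-language forms", "feasibility": 4, "desirability": 5,
     "viability": 3, "comment": "Clear win for self-represented litigants."},
    {"idea": "Plain-language forms", "feasibility": 5, "desirability": 4,
     "viability": 4, "comment": "Needs court sign-off on wording."},
    {"idea": "Court kiosk", "feasibility": 2, "desirability": 4,
     "viability": 2, "comment": "Hardware maintenance is a concern."},
]

criteria = ["feasibility", "desirability", "viability"]

for idea in sorted({r["idea"] for r in reviews}):
    rows = [r for r in reviews if r["idea"] == idea]
    scores = {c: mean(r[c] for r in rows) for c in criteria}
    overall = mean(scores.values())
    detail = " ".join(f"{c}={scores[c]:.1f}" for c in criteria)
    print(f"{idea}: overall {overall:.1f} ({detail})")
    for r in rows:
        print(f"  - {r['comment']}")
```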
Evaluations of the Intervention in the Field
Randomized Controlled Trials
A Randomized Controlled Trial (RCT for short) is an empirically rigorous way to determine whether a thing you are doing (let’s call it an ‘intervention’) is having the effect you intended. It involves careful planning of exactly what you are testing, identifying the ‘variables’ and ‘conditions’, and running multiple testing groups with different variations of these variables.
The goal is to have a group that has used the intervention whose efficacy you are testing, and a similar group that has not used it: the ‘control’ group. Because participants are assigned to the groups at random, any clear difference in outcomes between them can credibly be attributed to the intervention rather than to who happened to sign up. You can then collect data that will more clearly demonstrate whether the group with the intervention has noticeably different results than the control group.
Read more about RCTs and examples of how to run them here, at BetterEvaluation.
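To make the core logic concrete, here is a minimal sketch in Python of the heart of an RCT: random assignment of participants to treatment and control groups, followed by a comparison of outcomes. Everything in it is simulated; the participant pool, the binary outcome measure (say, whether a person resolved their legal issue), and the size of the treatment effect are assumptions made up for this example.

```python
import random
from statistics import mean

random.seed(42)  # reproducible example

# Hypothetical participant pool; in a real trial these would be
# actual court or legal aid users who consented to the study.
participants = [f"person_{i}" for i in range(200)]

# Step 1: random assignment. Randomization is what lets us attribute
# outcome differences to the intervention rather than to who signed up.
random.shuffle(participants)
treatment = participants[:100]   # receives the intervention
control = participants[100:]     # does not

# Step 2: collect an outcome for each person. Here we simulate a binary
# outcome (1 = resolved their issue) with a made-up treatment effect.
def simulated_outcome(in_treatment: bool) -> int:
    base_rate = 0.40
    effect = 0.15 if in_treatment else 0.0
    return 1 if random.random() < base_rate + effect else 0

treatment_outcomes = [simulated_outcome(True) for _ in treatment]
control_outcomes = [simulated_outcome(False) for _ in control]

# Step 3: compare the two groups.
diff = mean(treatment_outcomes) - mean(control_outcomes)
print(f"Treatment group success rate: {mean(treatment_outcomes):.0%}")
print(f"Control group success rate:   {mean(control_outcomes):.0%}")
print(f"Estimated effect of intervention: {diff:+.0%}")
```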
Exit Ratings
A quick way to get user feedback on an experience, service, or product is to ask users for a very simple rating on their ‘exit’. It can be offered on a text message line, on a tablet (for an in-person service), in a browser window (for a web-based service), or on a paper sheet (again, for in-person services). Ideally, it will be a very quick, visual interface that lets users rate what they have just experienced in a few seconds.
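However the ratings are collected, the analysis can stay simple. Here is a minimal sketch in Python, with made-up ratings, of how a stream of 1-to-5 exit ratings might be summarized for a service team:

```python
from collections import Counter
from statistics import mean

# Hypothetical exit ratings (1 = very poor, 5 = very good) collected
# from a tablet, text message line, or web widget at the end of a service.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4, 3, 5, 4]

counts = Counter(ratings)
print(f"Responses: {len(ratings)}, average rating: {mean(ratings):.2f}")
for score in range(5, 0, -1):
    bar = "#" * counts.get(score, 0)
    print(f"{score}: {bar} ({counts.get(score, 0)})")
```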