User Testing

How do we get more people’s voices, experiences, and priorities to be included in how our justice system works?

User testing is one family of methods that can help us understand what people want from services and technology. It can help us understand, practically, what people can actually use. And it can help us strategically put diverse people's needs first and imagine new offerings that work better for them.



Before your organization invests in building a new technology or service, user testing can help you verify whether it will be valuable enough to develop. There are multiple methods to use:

  1. Feedback interview: show your new design prototype to a stakeholder and interview them about how usable, useful, and engaging it is.
  2. Over-the-shoulder observation: give them the prototype and watch as they try to use it. Note down breakdowns, confusions, and payoffs. You can also give them a persona card to establish what point of view they are using it from.
  3. Survey instruments: have the tester fill out a short survey, usually with Likert-scale responses (levels of agreement). The questions can draw from surveys around Usability, Design for Dignity, and Procedural Justice.
  4. Comprehension testing: have people use the design prototype, and then, after they are done, give them a quiz to measure how much of the important content they have understood and retained.
  5. Idea book: make concept posters or other high-level presentations of your various ideas or features. Put them in a single book, like a catalog. Have the testers look through and rank which of the ideas they would like, and why.
  6. Priority sort: have people look at a wide variety of high-level ideas and judge their relative value, sorting them into buckets or spending pretend money on the ideas they favor.



Our team wrote this short book, User Testing New Ideas, to walk through exactly how we ran user testing for new traffic-court-oriented redesigns. We captured the steps, tools, and ethical considerations we took when doing early-stage testing of new prototypes.



In this 2018 illustrated article, “Doing User Research in the Courts on the Future of Access to Justice,” we profile in detail how we ran a certain type of user testing: prioritizing high-level ideas that had been proposed to improve court for litigants. We did this in court self-help centers, and we detail our methodology, tools, and results.


Why use Priority Ranking?

Priority ranking of high-level ideas lets your team gather many stakeholders’ feedback about which ideas should move forward on the agenda.

You can put the ideas on cards or post-its and have users rank them individually, or a working group can come to a consensus about which of the High-Medium-Low-No value categories each idea belongs in.
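Once each tester has sorted every idea into a bucket, a simple weighted tally can surface the group’s priorities. Here is a minimal sketch in Python; the idea names, the bucket weights, and the example votes are all illustrative assumptions, not part of any standard instrument.

```python
from collections import defaultdict

# Weights for the High-Medium-Low-No value buckets (illustrative assumption).
BUCKET_WEIGHTS = {"High": 3, "Medium": 2, "Low": 1, "No": 0}

def rank_ideas(votes):
    """votes: list of (idea, bucket) pairs, one per tester per idea.
    Returns ideas sorted by total weighted score, highest first."""
    scores = defaultdict(int)
    for idea, bucket in votes:
        scores[idea] += BUCKET_WEIGHTS[bucket]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: three testers sorting three hypothetical court-improvement ideas.
votes = [
    ("Text reminders", "High"), ("Text reminders", "High"), ("Text reminders", "Medium"),
    ("Plain-language forms", "Medium"), ("Plain-language forms", "High"), ("Plain-language forms", "Low"),
    ("Kiosk map", "Low"), ("Kiosk map", "No"), ("Kiosk map", "Low"),
]
for idea, score in rank_ideas(votes):
    print(idea, score)
```

If testers spend pretend money instead of choosing buckets, the same tally works: record each (idea, dollars) pair and sum the dollars per idea.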


When we ask people for short feedback on our new technology offerings, service designs, or information designs, we use an evaluation instrument that we’ve created. It’s a short survey that incorporates assessments from established instruments for evaluating software usability, gathering citizens’ feedback on government services, and assessing people’s sense of procedural justice and dignity while using an offering.

For each of these questions, we use a Likert scale of 0 (Disagree Strongly) to 7 (Agree Strongly).

  1. I think that I would like to use this system often to help me [insert objective: communicate with the court, navigate court process, etc.]
  2. I thought the [design name] was easy to use.
  3. I felt very confident using the [design name].
  4. This will help me to get through court more efficiently.
  5. This gave me clear, helpful information.
  6. I felt that I was understood using the tablet’s translations.
  7. I wish I could take [design name] around [place/system name] with me.
  8. I felt the [design name] provided most of the information I was looking for.
  9. I felt that the [design name] could be improved.
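After testers have filled out the survey, per-question averages quickly show where a prototype is weak. A minimal Python sketch, assuming responses are stored as lists of 0–7 ratings keyed by question number; the sample data and the flagging threshold (the scale midpoint) are illustrative assumptions.

```python
# responses: question number -> list of 0-7 Likert ratings from testers
# (hypothetical sample data for three testers on three of the questions)
responses = {
    1: [6, 5, 7],   # "I would like to use this system often..."
    2: [3, 2, 4],   # "...easy to use."
    5: [6, 6, 5],   # "...gave me clear, helpful information."
}

def question_means(responses):
    """Average rating per question across all testers."""
    return {q: sum(r) / len(r) for q, r in responses.items()}

# Flag questions averaging below the midpoint of a 0-7 scale (3.5).
means = question_means(responses)
weak = [q for q, m in means.items() if m < 3.5]
print(means)
print("Needs attention:", weak)
```

In this sample, question 2 averages 3.0 and would be flagged, pointing the team toward ease-of-use problems before wider testing.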


While trying particular methods, you might give your tester a ‘persona card’, so they know whose point of view to take while using or ranking the design.

Often in very early-stage testing we have people test from a different person’s perspective. We give them ‘personas’ to play, so that they scrutinize the design from these various points of view. We know this is not as good as having a wide range of people from these different backgrounds, but it is a test run: a way to see what issues we can spot with a design before investing in wider testing.

Here are some example personas that we give to people:

  • Persona 1: a 22-year-old digital native who is very confident with technology, prefers texting over phone calls and sometimes even over in-person communication, and feels confident in their ability to figure things out, especially using Google and social media, but feels relatively out of their depth in the legal system.
  • Persona 2: a 65-year-old who is a first-time user of the legal system but has dealt with lots of other complex social systems, like health insurance, social security, and taxes. They are not very confident with technology, but they do email a lot, still use AOL, and just moved to a basic smartphone this year at the insistence of their kids.
  • Persona 3: a 42-year-old who has been to court several times to deal with divorce, custody, and parenting plans. They have had enough repeat visits to feel confident about how to navigate the system and the relationships. They feel literate, but still want support to get things right.
  • Persona 4: a 31-year-old with very limited English proficiency. They have been through immigration proceedings with the help of family and friends before, but they do not feel confident going to court by themselves because of the language barrier and the unfamiliarity of the system.
  • Persona 5: an 18-year-old who is coming with an older family member to help translate for them in court. They are literate in English and feel confident with technology, but they are not familiar with the legal system at all. They grew up in the US and feel they can also help with cultural translation for their family member.



This short book from 2017 encapsulates some of the design training that we give to our students before they go into the field to conduct interviews or testing with members of the community.



O’Neil, Daniel X, and Smart Chicago Collaborative. Civic User Testing Group as a New Model for UX Testing, Digital Skills Development, and Community Engagement in Civic Tech. Chicago: The CUT Group, 2019.


Hagan, Margaret. “Community Testing 4 Innovations for Traffic Court Justice.” Legal Design and Innovation, 2017. 

Aldunate, Guillermo, Margaret Hagan, Jorge Gabriel Jimenez, Janet Martinez, and Jane Wong. “Doing User Research in the Courts on the Future of Access to Justice.” Legal Design and Innovation. Stanford, CA, July 2018. 


Maier, Andrew, and Sarah Eckert. “Introduction to Remote Moderated Usability Testing, Part 2: How.” 18F, US General Services Administration, November 20, 2018.

18F. “18F Methods: A Collection of Tools to Bring Human-Centered Design into Your Project.” US General Services Administration, 2020. 


Hagan, Margaret. “Participatory Design for Innovation in Access to Justice.” Daedalus 148, no. 1 (2019): 120–27.

Abstract: Most access-to-justice technologies are designed by lawyers and reflect lawyers’ perspectives on what people need. Most of these technologies do not fulfill their promise because the people they are designed to serve do not use them. Participatory design, which was developed in Scandinavia as a process for creating better software, brings end-users and other stakeholders into the design process to help decide what problems need to be solved and how. Work at the Stanford Legal Design Lab highlights new insights about what tools can provide the assistance that people actually need, and about where and how they are likely to access and use those tools. These participatory design models lead to more effective innovation and greater community engagement with courts and the legal system.


Hagan, M.D., 2018. “A Human-Centered Design Approach to Access to Justice: Generating New Prototypes and Hypotheses for Intervention to Make Courts User-Friendly.” Indiana Journal of Law and Social Equality, 6(2), pp.199–239. 

Abstract: How can the court system be made more navigable and comprehensible to unrepresented laypeople trying to use it to solve their family, housing, debt, employment, or other life problems? This Article chronicles human-centered design work to generate solutions to this fundamental challenge of access to justice. It presents a new methodology: human-centered design research that can identify key opportunity areas for interventions, user requirements for interventions, and a shortlist of vetted ideas for interventions. This research presents both the methodology and these “design deliverables” based on work with California state courts’ Self Help Centers. It identifies seven key areas for courts to improve their usability, and, in each area, proposes a range of new interventions that emerged from the class’s design work. This research lays the groundwork for pilots and randomized control trials, with its proposed hypotheses and prototypes for new interventions, that can be piloted, evaluated, and — ideally — have a practical effect on how comprehensible, navigable, and efficient the civil court system is.

