Is this legal form as user-friendly & effective as it could be?

Court and legal aid forms are meant to help a person through their justice journey by eliciting key information from them & structuring it into the fields that legal professionals need to evaluate the situation.

Forms can be evaluated with a combination of user testing, data analytics, expert evaluation, and costing. Together, these measurements can determine whether a form or technology tool is actually helping people do what it intends, in the most impactful and least burdensome way.

For example, when evaluating a form, the evaluation process should combine assessments across three phases: phase 1, discovery and uptake of the form; phase 2, usability and usefulness of the form itself; and phase 3, effectiveness in moving the person toward a just resolution of their legal problem.

1. “Pre-Form” Uptake and Discovery Assessment

  • Discoverability and reach of the form. How many people who are likely to need this form are actually finding it? This can be evaluated by estimating the expected audience: the population with this legal problem or in need of this form/tool. Compare that estimate to analytics or administrative data on how many people visit the form page or otherwise access it (see the sketch after this list).
  • Usage rates of the form. Of those who find the form, how many actually engage with it and try to fill it in or file it? This can be tracked through website analytics or in-person clinic visits, to see rates of bounces, drop-offs, or incompletion.
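
As a minimal sketch of how these phase 1 numbers combine, here is one way to compute reach and engagement rates. The population estimate, analytics counts, and variable names below are hypothetical placeholders, not figures from any real evaluation:

```python
# Phase 1 sketch: reach and engagement rates for a form page.
# All inputs are hypothetical placeholders; swap in your own
# population estimates and analytics exports.

estimated_eligible_population = 12_000  # people likely to need this form
unique_page_visitors = 3_600            # analytics: unique visits to the form page
visitors_who_engaged = 1_800            # analytics: started filling in or downloaded it

reach_rate = unique_page_visitors / estimated_eligible_population
engagement_rate = visitors_who_engaged / unique_page_visitors
drop_rate = 1 - engagement_rate         # found the form but left without engaging

print(f"Reach: {reach_rate:.0%} of the likely audience finds the form")
print(f"Engagement: {engagement_rate:.0%} of visitors try to use it")
print(f"Drop-off: {drop_rate:.0%} leave without engaging")
```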

2. Metrics around the “Form Itself”

  • Usability of the form as a way to enter the necessary information. How many people actually complete the form? How many do it in the correct, intended way? This can be evaluated through user testing sessions, in which people attempt to use the form and then give qualitative feedback and quantitative assessments of usability (see the sketch after this list).
  • Readability of the form questions. How easy is it for a person to understand what the form is asking for? This can be measured through readability scoring tools and in-person evaluation.
  • Cost of filling in the form. How expensive or burdensome is it to fill in the form and supply everything required to complete it? Do you need to hire someone or go through a service-seeking journey to be able to fill it in? This can be evaluated through interviews with past users to understand their costs, as well as lab simulations with user testers.
  • Time to fill in the form. How long does it take to fill in? The time can be measured through website analytics of form tools, clinic and self-help center management records, or user tests in lab simulations.
  • Error rates in supplying complete, accurate form information. How often do people fill in information incorrectly, failing to supply the data the form was intended to gather? How often do they choose options arbitrarily, not because they actually want them (like opting to raise every defense and claim listed)? This can be measured through lab simulations with test users, or by sampling forms that have been created or filed and evaluating them for errors or arbitrary responses.
  • Procedural Justice of the form. Does the person then feel the justice system is transparent, fair, and open to them? Or does the form experience make them feel the opposite? This can be measured with exit surveys of actual users of the form, or with lab simulations with test users.
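
To make these usability numbers concrete, here is a minimal sketch that summarizes a batch of user-test sessions. The record structure and figures are hypothetical stand-ins for whatever your testing tool or clinic log actually exports:

```python
# Phase 2 sketch: usability metrics from lab-simulation sessions.
# Each record is a hypothetical row: did the tester finish, how long
# did they take, and how many fields were wrong or arbitrary.
from statistics import median

sessions = [
    {"completed": True,  "minutes": 24, "field_errors": 1},
    {"completed": True,  "minutes": 41, "field_errors": 3},
    {"completed": False, "minutes": 15, "field_errors": 5},
    {"completed": True,  "minutes": 30, "field_errors": 0},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
median_time = median(s["minutes"] for s in sessions if s["completed"])
errors_per_attempt = sum(s["field_errors"] for s in sessions) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Median time to complete: {median_time} minutes")
print(f"Average field errors per attempt: {errors_per_attempt:.1f}")
```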

3. Metrics around “Post-Form” Process & Decisions

  • Filing rates. How many of the completed forms are filed with the court? This can be measured by comparing analytics on form tool usage or form downloads against filing counts in court administrative data (see the sketch after this list).
  • The acceptance rate of the form’s output. Of the filed forms, how many are accepted by the clerk and entered into the record? This can be measured with court administrative data about rejection rates.
  • Judicial usability of the form entries. When the judge and clerk are reviewing the case file, triaging it to the correct process, and making decisions about outcomes — is the information in the form usable and useful to them? This can be measured through benchmarking evaluation sessions with clerks and judge teams, in which they review a sample of filings to evaluate them based on criteria they might usually use as informal heuristics when reviewing a case. These sessions can help them formally identify the criteria that make a filing usable and useful to them, which the team can then use to score other filings.
  • Substantive Justice outcomes. Do people who use the form represent their case, claims, and evidence better? Do judicial decision-makers spend more time on their case and take care in applying the law correctly? Do they raise more claims and defenses persuasively? Do they end up with judgments in their favor more often? Substantive justice outcomes can be measured by case file reviews, to see how many claims and defenses are raised, how many of them moved forward with serious consideration by the judicial decision-makers, and how many were ultimately decided in favor of the litigant. They might also be assessed with exit surveys asking litigants whether they felt their problem was resolved and they received a just outcome.
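
As one way to tie these post-form numbers together, the sketch below computes a simple funnel from completed forms to accepted filings. The counts are hypothetical placeholders for what form-tool analytics and court administrative data would supply:

```python
# Phase 3 sketch: a filing funnel from completed forms to accepted filings.
# Counts are hypothetical; in practice they come from form-tool analytics
# and court administrative records.

forms_completed = 1_200  # analytics: forms completed or downloaded
forms_filed = 780        # court records: forms actually filed
forms_accepted = 690     # court records: filings accepted by the clerk

filing_rate = forms_filed / forms_completed
acceptance_rate = forms_accepted / forms_filed
end_to_end_rate = forms_accepted / forms_completed

print(f"Filing rate: {filing_rate:.0%} of completed forms reach the court")
print(f"Acceptance rate: {acceptance_rate:.0%} of filed forms enter the record")
print(f"End-to-end: {end_to_end_rate:.0%} of completed forms become accepted filings")
```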

Make your form more user-friendly

Form Experience Design Benchmark Principles

Are you concerned about your form’s performance?

You can use this design guide to improve the usability and lower the burdens of using a form. A redesign can also increase uptake of the form in the first phase of discovery and uptake.

Margaret Hagan of the Stanford Legal Design Lab presented a design guide to court forms (detailed in their paper, available as a PDF), with benchmark principles for these legal documents in particular.

These benchmark principles include:

  • Have a clear navigation scheme & glance-able structure. Can a person ‘get’ the key zones of info & tasks within a 1-minute glance-over?
  • Be calm & readable. Don’t overcrowd with info and tasks. Does it make the person feel more capable or less? Does it have distinct zones of work?
  • Support stressed-out users. Does it have off-ramps to info, examples, & assistance — especially near the hardest tasks?
  • Be easy to fill in. Have consistent, ample space to fill in info. Make it clear through spacing, boxes, and lines what is ‘right’ and ‘normal’ to put in.
  • Don’t prioritize ‘insider’ tasks & info over the user’s. Are user tasks in high-priority places? Are insider tasks put in discrete, low-importance places?