
How AI is Augmenting Human-Led Legal Advice at Citizens Advice

Caddy Chatbot to Support Supervision of Legal Advisers and Improve Q&A

The Citizens Advice network in England and Wales is a cornerstone of free legal and social support, comprising 270 local organizations operating across 2,540 locations. In 2024 alone, it provided advice to 2.8 million people via phone, email, and web chat. However, the rising cost-of-living crisis in the UK has increased the demand for legal assistance, particularly in areas such as energy disputes, welfare benefits, and debt management.

The growing complexity of cases, coupled with a shortage of experienced supervisors, has created a bottleneck. Trainees require more guidance, supervisors are overburdened, and delays in responses mean clients wait longer for critical help.

At the March 7, 2025 Stanford AI and Access to Justice research webinar, Stuart Pearson of the Citizens Advice SORT group (part of the broader Citizens Advice network in England) shared how they are using generative AI (GenAI) to support their advisers responsibly, not replace them. Their AI system, Caddy, was designed to amplify human interaction, reduce response times, and increase the efficiency of advisers and supervisors. Critically, though, Citizens Advice remains committed to a human-led service model, ensuring that AI enhances, rather than replaces, human expertise.

The Challenge: More Demand, Fewer Experts

Historically, when a trainee adviser encountered a complex case, they would reach out to a supervisor via chat for guidance. Supervisors would step in, identify key legal issues, and suggest an appropriate course of action.

However, as demand for legal help surged, this model became unsustainable:

  • More cases required complex supervision.
  • Supervisors faced an overwhelming number of trainee queries.
  • Delays in responses led to bottlenecks in service delivery.
  • Clients experienced longer wait times.

The question was: Could AI alleviate some of the pressure on supervisors while maintaining quality and ethical standards?

The Caddy Solution: AI as a Support Tool, Not a Replacement

Caddy is a human-in-the-loop Q&A tool: a more junior adviser asks Caddy a question, and a supervisor reviews the draft, so that together they arrive at the right answer to a user’s question.

How Caddy Works

Caddy was designed as an AI-powered assistant embedded in a group’s work software environment (such as Microsoft 365 or Google Workspace). It allows trainees and supervisors to work through a query as follows:

  1. An adviser asks Caddy a question about a client’s legal issue that has come up in an adviser-client interaction.
  2. Caddy searches only trusted sources (initially two well-maintained websites: Gov.uk and Citizens Advice’s own knowledge base).
  3. Caddy generates a proposed response, including relevant links, meant to guide the adviser in their interactions with the client.
  4. A supervisor reviews, edits, and approves the response. They have a text box for edits and two buttons, thumbs up or thumbs down, to approve or reject the response.
  5. If the supervisor gives a thumbs up, Caddy notifies the adviser and passes along any extra context the supervisor provided.
  6. The adviser relays the verified answer to the client, reframing or contextualizing it so the client can understand the details and rules.

Caddy does not replace human decision-making; it streamlines research, reduces supervisor workload, and increases response speed. Nor does it communicate directly with members of the public: it drafts guidance for a service provider to use in their interactions with the client.
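To make that workflow concrete, here is a minimal, hypothetical sketch in Python of a human-in-the-loop flow like the one described above. Every function name, stub, and URL here is an illustrative assumption; the actual open-source implementation on GitHub will differ.

```python
from dataclasses import dataclass

# Pre-approved sources only, per Citizens Advice's ethical principles.
TRUSTED_SOURCES = ["https://www.gov.uk", "https://www.citizensadvice.org.uk"]

@dataclass
class Draft:
    answer: str
    links: list[str]

def search_trusted_sources(question: str) -> list[str]:
    """Retrieve relevant pages, scoped strictly to the allowlist (stubbed)."""
    query = question.replace(" ", "+")
    return [f"{site}/search?q={query}" for site in TRUSTED_SOURCES]

def generate_draft(question: str, links: list[str]) -> Draft:
    """Draft a response grounded in the retrieved pages (LLM call stubbed out)."""
    return Draft(answer=f"Draft guidance for: {question}", links=links)

def supervisor_review(draft: Draft) -> tuple[bool, str]:
    """A human supervisor gives a thumbs up or down, plus optional extra context."""
    print(draft.answer)
    approved = input("Approve this draft? [y/n] ").strip().lower() == "y"
    context = input("Extra context for the adviser: ") if approved else ""
    return approved, context

def handle_query(question: str) -> str | None:
    """End-to-end flow. Note that Caddy never speaks to the client directly."""
    draft = generate_draft(question, search_trusted_sources(question))
    approved, context = supervisor_review(draft)
    if not approved:
        return None  # The adviser falls back to the usual supervision route.
    # The adviser, not Caddy, relays and contextualizes the verified answer.
    return f"{draft.answer}\nSupervisor note: {context}\nSources: {', '.join(draft.links)}"
```

The key design point is the approval gate: nothing reaches the adviser, let alone the client, without a supervisor’s thumbs up.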

Core Ethical Principles

From the outset, Citizens Advice set clear ethical guidelines to ensure AI was used responsibly and inclusively:

  1. Clients must always speak to a human.
  2. Every AI-generated response must be reviewed by a supervisor.
  3. Caddy only uses pre-approved, trusted sources.
  4. Transparency: Advisers know when they are using AI-generated information.

This approach aligns with the UK Government’s Algorithmic Transparency Recording Standard (ATRS), ensuring AI applications are openly documented and publicly accountable.

Pilot Program: Testing AI’s Impact in Legal Advice

To assess Caddy’s real-world effectiveness, Citizens Advice ran a 4–6 week pilot in six local offices, measuring key near-term outcomes:

  • Accuracy of AI-generated responses
  • Time saved per case
  • Adviser feedback
  • Government evaluation on AI in public services

Initial pilot feedback has been largely positive and bodes well for future use.

Accuracy rates were quite high: 80% of AI responses were supervisor-approved, meaning Caddy provided a correct answer roughly 8 times out of 10.

Time saved was another positive outcome. Response times dropped by more than half, from 10 minutes down to 4 minutes, allowing tens of thousands more clients to be helped.

As for qualitative stakeholder feedback, advisers appreciated the efficiency but wanted more features, offering ideas for improving performance, workflows, approval protocols, and other points.

The pilot responses also helped identify some important drawbacks that the team is working on: where was the remaining 20% of inaccuracy coming from, and how could advisers and users be better served?

Limited Information Sources

Caddy was initially restricted to two websites. While these were high-quality sources, they weren’t always comprehensive — especially for specialized welfare or debt cases.

The team is now exploring a possible solution: expanding Caddy’s trusted source list while maintaining accuracy controls.

Issues with Vague Queries

AI struggled with unclear or incomplete questions, leading to lower-quality responses. A possible solution here is to train advisers on better prompting techniques and add follow-up question capabilities.
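As an illustration of the follow-up idea (not the team’s actual approach), a simple clarification gate could check for missing details before any retrieval happens. The keywords, follow-up prompts, and vagueness heuristic below are purely hypothetical assumptions.

```python
# Hypothetical clarification gate: ask a follow-up question instead of
# answering when the adviser's query is too thin to retrieve good guidance.
FOLLOW_UPS = {
    "benefit": "Which benefit is the client asking about, and what has changed?",
    "debt": "What kind of debt is it, and has enforcement action started?",
    "energy": "Which supplier is involved, and is the client on a prepayment meter?",
}

def follow_up_needed(question: str) -> str | None:
    """Return a clarifying question if the query looks vague, else None."""
    words = question.lower().split()
    if len(words) >= 12:  # crude heuristic: longer queries usually carry context
        return None
    for keyword, follow_up in FOLLOW_UPS.items():
        if any(keyword in word for word in words):
            return follow_up
    return None

# Example: follow_up_needed("client has a debt problem")
# -> "What kind of debt is it, and has enforcement action started?"
```

In practice, a production system would more plausibly use the language model itself to decide when clarification is needed, rather than keyword rules.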

Supervisor Bottlenecks

Some advisers wanted the ability to approve AI responses without waiting for a supervisor in low-risk cases. The solution here involves exploring self-approval options for experienced advisers, who would not have to wait for supervisor approval before proceeding with Caddy’s response.

Ensuring AI is Inclusive and Ethical

Citizens Advice took a proactive approach to public engagement and ethical AI governance. Many of their strategies can be used by other groups interested in the responsible development of AI.

Engaging Clients Through a “People’s Panel”

The team partnered with Manchester Metropolitan University, which had independently been creating an AI Advisory Panel of citizens. This university-led effort recruited members of the public to join the panel and attend AI boot camps to learn about AI’s role in legal advice. Panelists were then presented with projects like Caddy and gave feedback on the tool’s risks, ethics, and features.

Governance and Risk Management

The team also worked through planning requirements and standards for its tool, including steps like:

  • Consequence Scanning: What are the risks of using AI in legal advice?
  • Planning for Trust & Reputation: Citizens Advice has existed since 1939 — maintaining public trust is paramount. Any new tech tool must enhance this reputation, rather than endanger it.
  • Constructing Shared Infrastructure for Scalability and Transparency: Caddy is open-source and available on GitHub so other nonprofits can build their own AI tools.

Future Developments: Expanding Caddy’s Capabilities

Here are some of the changes and improvements coming to Caddy in the near future.

Expanding Pilot to a National Rollout

Later this year, Caddy will roll out to the national Citizens Advice network, beyond its first pilot locations. This deliberate expansion will come after the team has had a chance to learn and address the issues that arose during the local pilots.

Conversational AI for More Dynamic Responses

Caddy will soon ask follow-up questions to refine responses in real time. This can help address vague questions that lead to answers that are unhelpful or inaccurate.

Building a Bank of “100% Accurate” Answers

The goal is to create a repository of vetted AI-generated responses that could be used without supervisor review. If successful, Caddy could be rolled out as a client-facing chatbot for basic legal queries.
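As a rough illustration of how such an answer bank might short-circuit review, here is a hypothetical sketch using exact-match lookup of supervisor-approved responses. The example data, normalization, and matching strategy are all assumptions; a real system would more plausibly use semantic matching with strict confidence thresholds.

```python
# Hypothetical sketch of a vetted-answer bank: look up a previously
# supervisor-approved response before invoking a fresh review cycle.

def normalize(question: str) -> str:
    """Collapse whitespace and case so near-identical questions match."""
    return " ".join(question.lower().split())

VETTED_ANSWERS: dict[str, str] = {
    normalize("How do I check my State Pension age?"):
        "You can check your State Pension age on Gov.uk.",  # approved example
}

def lookup_vetted(question: str) -> str | None:
    """Return a pre-approved answer if one exists; otherwise the query
    falls back to the full human-in-the-loop flow."""
    return VETTED_ANSWERS.get(normalize(question))
```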

AI-Powered Training Tools for Advisers

Here, the system could use call transcripts to auto-generate case notes and quality assessments. It could identify gaps in adviser knowledge by analyzing the types of questions they ask.

Or it could develop virtual clients for AI-powered role-playing training sessions.
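As a purely speculative sketch of the case-notes idea, the snippet below formats a call transcript into a structured summarization prompt. The prompt wording, note fields, and injected llm_call function are all assumptions, not anything Citizens Advice has built.

```python
# Speculative sketch: drafting structured case notes from a call transcript.

CASE_NOTE_PROMPT = """Summarise this adviser-client call into case notes with:
- Client issue (one sentence)
- Advice given and sources cited
- Follow-up actions
- Possible knowledge gaps for the adviser

Transcript:
{transcript}
"""

def draft_case_notes(transcript: str, llm_call) -> str:
    """llm_call is any text-generation callable the caller supplies,
    e.g. a wrapper around whichever model the organization uses."""
    return llm_call(CASE_NOTE_PROMPT.format(transcript=transcript))
```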

Lessons from the Caddy Experiment: The Future of AI in Access to Justice

Caddy’s pilot program offers a blueprint for AI-assisted legal services. The key takeaways for AI in legal help (at least as of the beginning of 2025):

  • AI should be an assistive tool, not a replacement for human advisers. Especially while a generative AI pilot is in its first stage, it is good to deploy it in an assistant role, with humans still providing substantial oversight.
  • Supervision and human oversight are crucial for ethical AI in legal services.
  • Training on prompting and follow-up questions improves AI accuracy.
  • Community involvement is essential: clients must have a say in AI’s role. Partnering with a university is a great way to gather more input from clients and community members.
  • Transparency and governance are key to maintaining trust.

Citizens Advice’s journey with Caddy highlights that responsible AI can enhance access to justice while ensuring that legal support remains human-centered, ethical, and inclusive. As AI continues to evolve, the real challenge will be balancing innovation with trust, oversight, and accountability — a challenge that Citizens Advice is well-positioned to lead.


Interest Form signup for AI & Access to Justice

Are you a legal aid lawyer, court staff member, judge, academic, tech developer, computer science researcher, or community advocate interested in how AI might increase Access to Justice — and also what limits and accountability we must establish so that it is equitable, responsible, and human-centered?

Sign up at this interest form to stay in the loop with our work at Stanford Legal Design Lab on AI & Access to Justice.

We will be sending those on this list updates on:

  • Events that we will be running online and in person
  • Publications, research articles, and toolkits
  • Opportunities for partnerships, funding, and more
  • Requests for data-sharing, pilot initiatives, and other efforts

Please be in touch through the form — we look forward to connecting with you!


Call for papers to the JURIX workshop on AI & Access to Justice

At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice.

There is an open call for submissions to the workshop, and the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending.

The workshop will be on December 18, 2023 in Maastricht, Netherlands (with possible hybrid participation available).

See more about the conference at the main JURIX 23 website.

About the AI & A2J workshop

This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.

Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.

What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?

At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:

  • Identify the key challenges and opportunities for using AI to improve access to justice.
  • Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
  • Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
  • Develop proposals for how to hold AI models accountable for the public good.

Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.

Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).

Are you generally interested in AI & Access to Justice? Sign up for our Stanford Legal Design Lab AI-A2J interest list to stay in touch.

Submit a paper to the AI & A2J Workshop

We welcome submissions of 4-12 pages (using the IOS formatting guidelines). A selection will be made on the basis of workshop-level reviewing focusing on overall quality, relevance, and diversity.

Workshop submissions may be about the topics described above, including:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Deadline extended to November 20, 2023

Submission Link: Submit your 4-12 page paper here: https://easychair.org/my/conference?conf=jurixaia2j

Notification: November 28, 2023

Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands

More about the JURIX Conference

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year, the JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands. It will take place on December 18-20, 2023.

The proceedings of the conference will be published in the Frontiers of Artificial Intelligence and Applications series of IOS Press. JURIX follows the Golden Standard and provides one of the best dissemination platforms in AI & law.