AI & Access to Justice Initiative

The Stanford Legal Design Lab has launched the AI and Access to Justice Initiative to conduct user research, system evaluation, network coordination & innovative R&D in this exciting space.

As more justice professionals & community members become aware of AI, the Legal Design Lab is doing cutting-edge R&D on how AI systems perform when people try to use them for legal problem-solving, and on how we can build smarter, more responsible AI for access to justice.

If you’re interested in joining our AI+A2J Interest List, please sign up here to stay in touch.

Explore the Legal Design Lab’s AI & Access to Justice Initiative

This webpage is our main hub for the AI-A2J Initiative.

Choose from the categories here to learn more about our specific initiatives on AI & Access to Justice.

If you are interested in AI & Access to Justice, sign up using this form to stay updated on work, opportunities, and events in this space.

Explore the Legal Design Lab’s current projects on AI & Access to Justice.

Sign up to join our network and see what events are coming up around AI & Access to Justice.

Find the latest academic courses the Lab is offering on AI and Access to Justice, as well as past class reports.

Explore our Lab’s latest research findings, other groups’ publications, and research opportunities.

Projects on AI & Access to Justice

Along with understanding possible problems & regulation of AI, we are also excited to explore how AI models and tools might improve the justice system.

This includes research, community design, and the design and development of new AI that can responsibly empower people regarding their legal rights, and that can help service providers make the justice system more accessible and equitable.

The Legal Design Lab is working both on the infrastructure of responsible AI (gathering data, creating labeled datasets, establishing taxonomies, creating benchmarks, and finding pilot partners) and on the design & development of new AI tech/service pilots.

User Research into AI for Legal Help

The Legal Design Lab, including through its Policy Lab/d.school class AI for Legal Help, has been interviewing adults across America about whether and how they would use AI for legal problem-solving.

In our online and in-court interviews, we ask people whether, if they had a fictional legal problem, they would use AI to address it. If they would, we watch over their shoulder as they try to use an AI platform to address this fictional problem.

This research aims to identify:

  • whether people expect, in the abstract, that AI will be helpful and trustworthy for legal problem-solving
  • whether, once they actually interact with an AI platform, they do indeed find it helpful and trustworthy
  • what people identify as making an AI’s responses helpful or harmful, and how much warning or disclosure they want
  • what the ideal AI tool might be, from different people’s perspectives.

We will be publishing our research regularly, and continuing to run interviews as the AI platforms change and as people’s perceptions and use of AI change.

See our interview data dashboard & raw data at our AI+A2J User Research page.

See our publications in the Research & Publications section.

Task List for AI & A2J

What specific legal tasks could AI perform to advance access to justice?

The Legal Design Lab is compiling a central list of the particular activities that lawyers, litigants/users, and other service providers must do in order to successfully resolve civil legal problems like evictions, divorces, debt collection, and traffic tickets.

This task list covers: the jobs that people do on their own to make sense of their legal problem and begin making a plan; the legal tasks around paperwork and service provision; the court staff’s tasks, like screening case filings and triaging cases to the right procedural tracks; and negotiation tasks, like mediating settlement conversations and reviewing possible settlement agreements.
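
To make this concrete, here is a minimal sketch of how one entry in such a task list might be represented as structured data. The field names and example values are our own illustrative assumptions, not the Lab’s published schema.

```python
from dataclasses import dataclass

@dataclass
class LegalTask:
    """One entry in a hypothetical AI & A2J task list (illustrative schema)."""
    actor: str             # who performs it: "litigant", "lawyer", "court staff", ...
    problem_type: str      # e.g. "eviction", "debt collection", "divorce"
    stage: str             # e.g. "understand problem", "paperwork", "triage", "negotiation"
    description: str       # plain-language statement of the task
    ai_assistable: bool = False  # working judgment: could AI plausibly help here?

# Illustrative entries drawn from the categories described above
TASKS = [
    LegalTask("litigant", "eviction", "understand problem",
              "Make sense of an eviction notice and begin making a plan", True),
    LegalTask("court staff", "any", "triage",
              "Screen case filings and route cases to the right procedural track", True),
    LegalTask("mediator", "debt collection", "negotiation",
              "Review a possible settlement agreement for completeness and fairness", True),
]
```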

See our current Legal AI Task List here.

Quality Evaluation for AI Legal Q-and-A

What makes an AI answer to a legal question good or bad?

Our Lab (including with our amazing students in Winter Quarter’s AI for Legal Help class) has been establishing a common set of practical, measurable criteria by which we can evaluate AI Legal Q&A.

This work has included:

  • Interviewing justice professionals (legal aid lawyers, law help website administrators, live chat operators, hotline staffers, court help center staff, law professors, bar officials, and more) about what criteria are important (or not) to prioritize when evaluating AI’s answers to normal people’s questions about their legal problems.
  • Making a draft list of the top criteria by which justice professionals say we should be measuring AI models’ legal Q-and-A performance
  • Building a ‘rating game’ with Q-and-A pairs of real people’s legal questions (taken from our past user interviews) and different AI models’ answers, and then putting the expert-approved criteria into the game. (All of this builds on Learned Hands.)
  • Having justice professionals play this rating game, and narrate aloud why they are rating the Q-and-A pairs as they do. This both gives us a labeled dataset of which AI answers are good or bad (and how they perform on more specific criteria) and helps us refine our criteria and understand the logic behind the professionals’ ratings.
  • Exploring which of the criteria might be evaluated automatically, and exactly how we could safely automate the evaluation.
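
As an illustration of what this rating work produces, here is a minimal sketch of how expert ratings might be recorded and aggregated into per-criterion scores for each model. The record fields and criterion names are hypothetical stand-ins, not the Lab’s actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Rating:
    """One expert judgment of one AI answer on one criterion (illustrative)."""
    question_id: str  # which user question was answered
    model: str        # which AI platform produced the answer
    criterion: str    # e.g. "accuracy", "jurisdiction-appropriate", "actionable"
    score: int        # e.g. 1 (poor) to 5 (excellent)
    rater: str        # which justice professional gave the rating

ratings = [
    Rating("q01", "model-a", "accuracy", 4, "rater1"),
    Rating("q01", "model-a", "actionable", 2, "rater2"),
    Rating("q01", "model-b", "accuracy", 3, "rater1"),
]

# Aggregate mean score per (model, criterion): the raw material for a benchmark
totals = defaultdict(list)
for r in ratings:
    totals[(r.model, r.criterion)].append(r.score)

for (model, criterion), scores in sorted(totals.items()):
    print(f"{model:8s} {criterion:12s} mean={sum(scores) / len(scores):.2f}")
```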

As we finalize this work, we will share more about the academic publications, white papers, and technical proposals that emerge from it.

See more details at our AI+A2J Metrics page here.

Learned Hands game to label people’s legal issues

Learned Hands is an online game to build labeled datasets, machine learning models, and new applications that can connect people online to high-quality legal help. Our team at Legal Design Lab partnered with the team at Suffolk LIT Lab to build it, with the support of the Pew Charitable Trusts.

Playing the Learned Hands game lets you label people’s stories with a standardized list of legal issue codes. It’s a mobile-friendly web application that you’re welcome to play, and you can earn pro bono credit for doing so.

The game produces a labeled dataset of people’s stories, tagged with the legal issues that apply to their situation. This dataset can be used to develop AI tools like classifiers to automatically spot people’s issues.
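
As a rough illustration of that last point, here is a minimal sketch of training a multi-label issue-spotting classifier on this kind of labeled dataset, using off-the-shelf scikit-learn components. The toy stories and issue codes below are invented for illustration; the real Learned Hands data and label taxonomy live at the links that follow.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-ins for Learned Hands data: people's stories plus legal issue labels
stories = [
    "My landlord is evicting me and kept my security deposit",
    "A debt collector keeps calling me at work about an old credit card",
    "My ex won't follow our custody agreement for the kids",
    "I got an eviction notice telling me to vacate my apartment in ten days",
    "A collections agency sued me over medical bills I can't pay",
    "I need to change my child support order after losing my job",
]
labels = [{"housing"}, {"debt"}, {"family"}, {"housing"}, {"debt"}, {"family"}]

# Vectorize the stories and binarize the label sets
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(stories)
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# Train one binary classifier per issue code
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

# Spot the issues in a new story
new = vectorizer.transform(["The sheriff posted an eviction notice on my door"])
print(mlb.inverse_transform(clf.predict(new)))  # with real training data, should surface 'housing'
```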

Read more about the Learned Hands project here.

Play the Learned Hands game here.

See the labeled dataset at Suffolk LIT Lab’s site.

AI/Legal Help Problem Incident Database

The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute by entering information into this form, which feeds into the database.

We will make this database available in the near future, as we collect and review more records. For this database, we’re looking for specific examples of where AI platforms (like ChatGPT, Bard, Bing Chat, etc.) provide problematic responses, like:

  • incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
  • hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
  • irrelevant, distracting, or off-topic information;
  • misrepresentation of the law;
  • overly simplified information that loses key nuance or cautions;
  • anything else that might be harmful to a person trying to get legal help.

You can report any incidents you’ve experienced via this form.
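
For illustration, here is a minimal sketch of how an incident record covering these problem categories might be structured. The field names and category codes are our own assumptions, not the database’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class ProblemType(Enum):
    """Categories of problematic responses, mirroring the list above (illustrative)."""
    INCORRECT_INFO = "incorrect legal information"
    HALLUCINATION = "hallucinated case, statute, organization, or hotline"
    IRRELEVANT = "irrelevant or off-topic information"
    MISREPRESENTATION = "misrepresentation of the law"
    OVERSIMPLIFIED = "oversimplified, losing key nuance or cautions"
    OTHER_HARM = "otherwise potentially harmful"

@dataclass
class Incident:
    """One reported AI & Legal Help problem incident (hypothetical record format)."""
    platform: str             # e.g. "ChatGPT", "Bard", "Bing Chat"
    user_prompt: str          # what the person asked
    ai_response: str          # what the platform answered
    problem_type: ProblemType
    jurisdiction: str         # where the asker is located
    notes: str = ""           # reporter's explanation of the potential harm

example = Incident(
    platform="ChatGPT",
    user_prompt="How long do I have to respond to an eviction notice?",
    ai_response="You always have 30 days.",  # invented example of a wrong answer
    problem_type=ProblemType.INCORRECT_INFO,
    jurisdiction="California",
    notes="Deadlines vary by jurisdiction and notice type; a flat answer misleads.",
)
```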

Recent Posts on AI & Access to Justice

3 Shifts for AI in the Justice System: LSC 50th Anniversary presentation

In mid-April, Margaret Hagan presented on the Lab’s research and development efforts around AI and access to justice at the Legal Services Corporation 50th anniversary forum. This large gathering of legal aid executive directors, national justice leaders, members of Congress, philanthropists, and corporate leaders celebrated the work of LSC & profiled future directions of legal services. Margaret was on a panel along with legal aid leader Sateesh…

Continue Reading

User interviews on AI & Access to Justice

As we continue to run interviews with people from across the country about their possible use of AI for legal help tasks, we wanted to share what we’re learning about people’s thoughts about AI. Please see the full interactive Data Dashboard of interview results here. Below, find images of the data dashboard. Follow the link above to interact more with the data. We will also maintain a…

Continue Reading

AI as the next frontier of making justice accessible

Last week, Margaret had the privilege of presenting on the lab’s work on AI and Innovation at the American Academy of Arts and Sciences in Cambridge, Massachusetts. As a part of the larger conference of Making Justice Accessible, her work was featured on the panel about new solutions to improve the civil justice system through technology. She discussed how the Lab’s current research and development work around…

Continue Reading

AI & Justice Workers

At the Arizona State University/American Bar Foundation conference on the Future of Justice Work, Margaret Hagan spoke on whether and how generative AI might be part of new service and business models to serve people with legal problems. Many in the audience are already developing new staffing & service models that combine traditional lawyer-provided services with help provided by community-justice workers. In the conference’s final session, the…

Continue Reading

User Research Workshop on AI & A2J

In December 2023, our lab hosted a half-day workshop on AI for Legal Help. Our policy lab class of law students, master’s students, and undergraduates presented their user research findings from their September through December research. Our guests, including those from technology companies, universities, state bars, legal aid groups, community-based organizations, and advocacy groups/think tanks, all worked together in break-out sessions to tackle some of the big policy…

Continue Reading

Schedule for AI & A2J Jurix workshop

Our organizing committee was pleased to receive many excellent submissions for the AI & A2J Workshop at Jurix on December 18, 2023. We were able to select half of the submissions for acceptance, and we extended the half-day workshop to be a full-day workshop to accommodate the number of submissions. We are pleased to announce our final schedule for the workshop: Schedule for the AI & A2J…

Continue Reading

Presentation to Indiana Coalition for Court Access

On October 20th, the Legal Design Lab’s executive director presented on “AI and Legal Help” to the Indiana Coalition for Court Access. This presentation was part of a larger discussion about research projects, a learning community of judges, and evidence-based court policy and rules changes. What can courts, legal aid groups, and statewide justice agencies be doing to best serve people with legal problems in their communities? Margaret’s…

Continue Reading

Interest Form signup for AI & Access to Justice

Are you a legal aid lawyer, court staff member, judge, academic, tech developer, computer science researcher, or community advocate interested in how AI might increase Access to Justice — and also what limits and accountability we must establish so that it is equitable, responsible, and human-centered? Sign up at this interest form to stay in the loop with our work at Stanford Legal Design Lab on AI…

Continue Reading

Report a problem you’ve found with AI & legal help

The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute by entering information into this form, which feeds into the database. We will make this database available in the near future, as we collect and review more records. For this database, we’re looking for specific examples of where AI platforms (like ChatGPT, Bard, Bing Chat,…

Continue Reading

Call for papers to the JURIX workshop on AI & Access to Justice

At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice. There is an open call for submissions to the workshop, and the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending. The…

Continue Reading

AI Platforms & Privacy Protection through Legal Design

How can regulators, researchers, and tech companies proactively protect people’s rights & privacy, even as AI so quickly becomes ubiquitous? By Margaret Hagan, originally published at Legal Design & Innovation. This past week, I had the privilege of attending the State of Privacy event in Rome, with policy, technical, and research leaders from Italy and Europe. I was at a table focused on the intersection of Legal Design,…

Continue Reading

Opportunities & Risks for AI, Legal Help, and Access to Justice

As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems. Could AI tools like ChatGPT, Bing Chat, and Google Bard help get more people crucial information about their rights & the law? Could AI tools help people efficiently and…

Continue Reading

American Academy event on AI & Equitable Access to Legal Services

The Lab’s Margaret Hagan was a panelist at the May 2023 national event on AI & Access to Justice hosted by the American Academy of Arts & Sciences. The event was called AI’s Implications for Equitable Access to Legal and Other Professional Services. It took place on May 10, 2023. Read more about the American Academy’s work on justice reform here. More about the May event from…

Continue Reading

AI Goes to Court: The Growing Landscape of AI for Access to Justice

By Jonah Wu, student research fellow at Legal Design Lab, 2018-2019

Table of Contents:
1. Can AI help improve access to civil courts?
2. Background to the Rise of AI in the Legal System
3. What could be? Proposals in the Literature for AI for access to justice
4. What is happening so far? AI in action for access
4.1. Predicting settlement arrangements, judicial decisions, and other outcomes of claims
4.2. Detecting abuse…

Continue Reading

Houston.ai access AI

Legal Server has a project, Houston.AI: a new set of tools that allows for smarter intake, identification of people’s issues, and referral to the right support. What? Houston.AI is a web-based platform designed to help non-profit legal aid agencies more effectively serve those who cannot afford attorneys. Comprised of a series of micro-services leveraging machine learning, artificial intelligence, and expert systems, Houston.AI is designed…

Continue Reading

Courses on AI & Access to Justice

Our Lab team is teaching interdisciplinary courses at Stanford Law School and the d.school on how AI can be responsibly built to increase access to justice, and on what limits might be put on it to protect people.

Please write to us if you are interested in taking a course, or being a partner on one.

Autumn-Winter 23-24 AI for Legal Help 809E

In Autumn-Winter quarters 2023-24, the Legal Design Lab team will offer the policy lab class “AI for Legal Help”.

It is a 3-credit course, with course code LAW 809E. We will be working with community groups & justice institutions to interview members of the public about whether and how they would use AI platforms (like ChatGPT) to deal with legal problems like evictions, debt collection, or domestic violence.

Our class client is the Legal Services Corporation’s TIG (Technology Initiative Grant) team.

The goal of the class is to develop a community-centered agenda about how to make these AI platforms more effective at helping people with these problems, while also identifying the key risks they pose to people & technical/policy strategies to mitigate these risks.

The class will be taught with user interviews, testing sessions, and multi-stakeholder workshops at its core, so that students synthesize diverse points of view into an agenda that can make AI tools more equitable, accessible, and responsible in the legal domain.

Explore the 809E AI for Legal Help course more here.

Class Report for AI for Legal Help, Autumn 2023

Read the Autumn Quarter class’ final report on their interview findings.

The students interviewed adults in the US about their possible use of AI to address legal problems. The students’ analysis of the interview results covers the findings and themes that emerged, distinct types of users, needs that future solutions and policies should address, and possible directions to improve AI platforms for people’s legal problem-solving needs.

After holding an interactive workshop with legal and policy stakeholders in December 2023, the class used these experts’ responses to improve their analysis and strengthen their conclusions.

Read the full report from the students here.

Network Events on AI & Access to Justice

Sign up for our AI+A2J Interest List

Are you a justice professional, academic, funder, technologist, or policymaker interested in the future of AI & the justice system? Please fill in the form embedded below (or at this link) to stay in touch. We’ll notify you about events, publications, opportunities, and more.

AI+A2J Events

The Stanford Legal Design Lab has been convening a series of workshops among key stakeholders who can design and develop new AI efforts to help people with legal problems: legal aid lawyers, court staff, judges, computer science researchers, tech developers, and community members.


Generative AI & Justice workshop at LSC-ITC

Our Lab team collaborated with other AI-justice researchers to run a large workshop on how to use generative AI to increase access to justice at the Legal Services Corporation’s Innovations in Tech Conference.

Please see our write-up here.

JURIX ’23 AI & Access to Justice academic workshop

In December 2023, our Lab team is co-hosting an academic workshop on AI & Access to Justice at the JURIX Conference on Legal Knowledge and Information Systems.

There is an open call for submissions to the workshop. Submissions are due by November 12, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper for the workshop or consider attending.

Read more about the workshop & call for papers here.

AI + A2J User Research Workshop

In Autumn 2023, our AI for Legal Help class hosted an interactive workshop with representatives from technology companies, bar associations, universities, courts, legal aid groups, and tenants unions. The students presented the preliminary findings of their user research, received feedback from these various stakeholders, and then brainstormed in breakout groups about how to move forward with the user research findings, toward a better future for AI & Access to Justice.

Read more about the workshop & proposals here.

AI & Legal Help Crossover Workshop

In Summer 2023, an interdisciplinary group of researchers at Stanford hosted the “AI and Legal Help Crossover” event, for stakeholders from the civil justice system and computer science to meet, talk, and identify promising next steps to advance the responsible development of AI for improving the justice system.

Read more about the workshop & proposals here.

Stanford-SRLN Spring 2023 brainstorm session

In Spring 2023, the Stanford Legal Design Lab collaborated with the Self-Represented Litigation Network to organize a stakeholder session on artificial intelligence (AI) and legal help within the justice system. We conducted a one-hour online session with justice system professionals from various backgrounds, including court staff, legal aid lawyers, civic technologists, government employees, and academics. The purpose of the session was to gather insights into how AI is already being used in the civil justice system, identify opportunities for improvement, and highlight potential risks and harms that need to be addressed. We documented the discussion with a digital whiteboard.

Read more about the session & the brainstorm of opportunities and risks.

Research on AI & Access to Justice

The Stanford Legal Design Lab has been researching what community members want from AI for justice problems, how AI systems perform on justice-related queries, and what opportunities there are to increase the quality of AI in helping people with their justice problems.

Towards Human-Centered Standards for Legal Help AI

Margaret D. Hagan, “Towards Human-Centered Standards for Legal Help AI.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. February 2024. Theme issue: ‘A complexity science approach to law and governance’. Ed. Daniel M. Katz, J. B. Ruhl and Pierpaolo Vivo. Available at https://royalsocietypublishing.org/doi/10.1098/rsta.2023.0157

As more groups consider how AI may be used in the legal sector, this paper envisions how companies and policymakers can prioritize the perspective of community members as they design AI and policies around it. It presents findings of structured interviews and design sessions with community members, in which they were asked about whether, how, and why they would use AI tools powered by large language models to respond to legal problems like receiving an eviction notice. The respondents reviewed options for simple versus complex interfaces for AI tools, and expressed how they would want to engage with an AI tool to resolve a legal problem. These empirical findings provide directions that can counterbalance legal domain experts’ proposals about the public interest around AI, as expressed by attorneys, court officials, advocates, and regulators. By hearing directly from community members about how they want to use AI for civil justice tasks, what risks concern them, and the value they would find in different kinds of AI tools, this research can ensure that people’s points of view are understood and prioritized, rather than only domain experts’ assertions about people’s needs and preferences around legal help AI.

LegalBench: A collaboratively built benchmark

Guha, Neel, Julian Nyarko, Daniel Ho, Christopher Ré, Adam Chilton, Alex Chohlas-Wood, Austin Peters, Margaret Hagan, et al. “LegalBench: A collaboratively built benchmark for measuring legal reasoning in large language models.” Advances in Neural Information Processing Systems 36 (2024). Available at https://arxiv.org/pdf/2308.11462.pdf

The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers find interesting. To enable cross-disciplinary conversations about LLMs in the law, we additionally show how popular legal frameworks for describing legal reasoning — which distinguish between its many forms — correspond to LegalBench tasks, thus giving lawyers and LLM developers a common vocabulary. This paper describes LegalBench, presents an empirical evaluation of 20 open-source and commercial LLMs, and illustrates the types of research explorations LegalBench enables.
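
To make the mechanics concrete, here is a minimal sketch of how a LegalBench-style classification task can be scored: format a few-shot prompt, collect the model’s short answer, and compute exact-match accuracy. The tiny task, prompt template, and query_model stub are illustrative assumptions, not the official harness; the actual 162 tasks and their prompts are in the LegalBench release.

```python
# Minimal sketch of LegalBench-style task scoring (illustrative, not the official harness)

FEW_SHOT = (
    "Decide whether the clause is a confidentiality clause. Answer Yes or No.\n\n"
    "Clause: The Receiving Party shall not disclose Confidential Information.\n"
    "Answer: Yes\n\n"
)

# Toy test items standing in for one task's examples
test_items = [
    {"clause": "Each party shall keep the terms of this Agreement secret.", "label": "Yes"},
    {"clause": "This Agreement is governed by the laws of Delaware.", "label": "No"},
]

def query_model(prompt: str) -> str:
    """Stub for an LLM call (hypothetical); swap in a real API client."""
    return "Yes"  # placeholder output

def evaluate(items) -> float:
    """Exact-match accuracy of model answers against gold labels."""
    correct = 0
    for item in items:
        prompt = FEW_SHOT + f"Clause: {item['clause']}\nAnswer:"
        correct += query_model(prompt).strip() == item["label"]
    return correct / len(items)

print(f"accuracy = {evaluate(test_items):.2f}")
```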

Good AI Legal Help, Bad AI Legal Help

Margaret D. Hagan. (2023). Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories. In JURIX AI and Access to Justice Workshop. Retrieved from https://drive.google.com/file/d/14CitzBksHiu_2x8W2eT-vBe_JNYOcOhM/view?usp=drive_link 

Abstract:

Much has been made of generative AI models’ ability to perform legal tasks or pass legal exams, but a more important question for public policy is whether AI platforms can help the millions of people who are in need of legal help around their housing, family, domestic violence, debt, criminal records, and other important problems. When a person comes to a well-known, general generative AI platform to ask about their legal problem, what is the quality of the platform’s response? Measuring quality is difficult in the legal domain, because there are few standardized sets of rubrics to judge things like the quality of a professional’s response to a person’s request for advice. This study presents a proposed set of 22 specific criteria to evaluate the quality of a system’s answers to a person’s request for legal help for a civil justice problem. It also presents the review of these evaluation criteria by legal domain experts like legal aid lawyers, courthouse self help center staff, and legal help website administrators. The result is a set of standards, context, and proposals that technologists and policymakers can use to evaluate quality of this specific legal help task in future benchmark efforts.

Opportunities & Risks for AI, Legal Help, and Access to Justice

Margaret D. Hagan (2023, June). Opportunities & Risks for AI, Legal Help, and Access to Justice. Legal Design and Innovation. Retrieved from https://medium.com/legal-design-and-innovation/opportunities-risks-for-ai-legal-help-and-access-to-justice-9c2faf8be393

Intro:

As more lawyers, court staff, and justice system professionals learn about the new wave of generative AI, there’s increasing discussion about how AI models & applications might help close the justice gap for people struggling with legal problems.

Could AI tools like ChatGPT, Bing Chat, and Google Bard help get more people crucial information about their rights & the law?

Could AI tools help people efficiently and affordably defend themselves against eviction or debt collection lawsuits? Could they help people fill in paperwork, create strong pleadings, prepare for court hearings, or negotiate good resolutions?

This report presents initial proposals for the tasks, scenarios & use cases where AI could be helpful.

It also covers the risks, harms, and worries that have been raised.

Finally, it lays out some key infrastructure proposals.

Evaluating the Quality of AI in the Legal Domain

Bommarito, M. J., & Katz, D. M. (2023). GPT Takes the Bar Exam. SSRN Electronic Journal, 1–7. https://doi.org/10.2139/ssrn.4314839 

Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. B. (2023). ChatGPT Goes to Law School. SSRN Electronic Journal, 1–16. https://doi.org/10.2139/ssrn.4335905 

Dahl, M., Magesh, V., Suzgun, M., & Ho, D. E. (2024). Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. Retrieved from http://arxiv.org/abs/2401.01301 

Deroy, A., Ghosh, K., & Ghosh, S. (2023). How Ready are Pre-trained Abstractive Models and LLMs for Legal Case Judgement Summarization? CEUR Workshop Proceedings, 3423, 8–19. Retrieved from https://arxiv.org/abs/2306.01248 

Fei, Z., Shen, X., Zhu, D., Zhou, F., Han, Z., Zhang, S., … Ge, J. (2023). LawBench: Benchmarking Legal Knowledge of Large Language Models, 1–38. Retrieved from http://arxiv.org/abs/2309.16289 

Guha, N., Ho, D. E., Nyarko, J., & Re, C. (2022). LegalBench: Prototyping a Collaborative Benchmark for Legal Reasoning. Stanford, CA. Retrieved from https://arxiv.org/abs/2209.06120 

Guha, N., Nyarko, J., Ho, D. E., Ré, C., Chilton, A., Narayana, A., … Li, Z. (2023). LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models, 1–143. https://doi.org/10.2139/ssrn.4583531 

Harden, S. (2023). The Results: Rating Generative AI Responses to Legal Questions. Retrieved November 6, 2023, from https://samharden.substack.com/p/the-results-rating-generative-ai?r=3c0pj&utm_campaign=post&utm_medium=web 

Henderson, P., Krass, M. S., Zheng, L., Guha, N., Manning, C. D., Jurafsky, D., & Ho, D. E. (2022). Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset. Advances in Neural Information Processing Systems, 35(NeurIPS). https://arxiv.org/abs/2207.00220

Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P. (2023). GPT-4 Passes the Bar Exam. SSRN Electronic Journal, 1–35. https://doi.org/10.2139/ssrn.4389233

Savelka, J., Ashley, K. D., Gray, M. A., Westermann, H., & Xu, H. (2023). Explaining Legal Concepts with Augmented Large Language Models (GPT-4). Retrieved from http://arxiv.org/abs/2306.09525  

Tan, J., Westermann, H., & Benyekhlef, K. (2023). ChatGPT as an Artificial Lawyer? In CEUR Workshop Proceedings (Vol. 3435). Retrieved from https://ceur-ws.org/Vol-3435/short2.pdf 

AI-A2J Opportunities, Concerns, and Behavior

Stanford Policy Lab 809E Autumn Quarter. (2023). The Use and Application of Generative AI for Legal Assistance. Retrieved from https://docs.google.com/document/d/1bx_HXOMrgPjGjVDR21Vfekuk8_4ny0878430JKZ9-Sk/edit?usp=sharing 

Hagan, M. D. (2023, June). Opportunities & Risks for AI, Legal Help, and Access to Justice. Legal Design and Innovation. Retrieved from https://medium.com/legal-design-and-innovation/opportunities-risks-for-ai-legal-help-and-access-to-justice-9c2faf8be393 

Hagan, M. D. (2024). Towards Human-Centered Standards for Legal Help AI. Philosophical Transactions of the Royal Society A. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4582745

Chien, C., Kim, M., Raj, A., & Rathish, R. (2024). How LLMs Can Help Address the Access to Justice Gap through the Courts. Berkeley, CA. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4683309 

Granat, R. (2023). ChatGPT, Access to Justice, and UPL. Retrieved June 19, 2023, from https://www.lawproductmakers.com/2023/03/chatgtp-access-to-justice-and-upl/ 

Guzman, H. (2023). AI’s “Hallucinations” Add to Risks of Widespread Adoption. Retrieved June 19, 2023, from https://www.law.com/corpcounsel/2023/03/23/ais-hallucinations-add-to-risks-of-widespread-adoption/?slreturn=20230519164801 

Holt, A. T. (2023). Legal AI-d to Your Service: Making Access to Justice a Reality. Vanderbilt Journal of Entertainment and Technology Law. Retrieved from https://www.vanderbilt.edu/jetlaw/2023/02/04/legal-ai-d-to-your-service-making-access-to-justice-a-reality/ 

Kanu, H. (2023, April). Artificial intelligence poised to hinder, not help, access to justice. Reuters. Retrieved from https://www.reuters.com/legal/transactional/artificial-intelligence-poised-hinder-not-help-access-justice-2023-04-25/ 

Pacheco, S. (2023, March). DoNotPay Lawsuits: A Setback for Justice Initiatives? Bloomberg Law. Retrieved from https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-donotpay-lawsuits-a-setback-for-justice-initiatives

Perlman, A. (2023). The Implications of ChatGPT for Legal Services and Society. The Practice. Cambridge, MA. Retrieved from https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/ 

Poppe, E. T. (2019). The Future Is ̶B̶r̶i̶g̶h̶t̶ Complicated: AI, Apps & Access to Justice. Oklahoma Law Review, 72(1). Retrieved from https://digitalcommons.law.ou.edu/olr/vol72/iss1/8 

Simshaw, D. (2022). Access to A.I. Justice: Avoiding an Inequitable Two-Tiered System of Legal Services. Yale Journal of Law & Technology, 24, 150–226. https://yjolt.org/access-ai-justice-avoiding-inequitable-two-tiered-system-legal-services 

Stepka, M. (2022, February). Law Bots: How AI Is Reshaping the Legal Profession. ABA Business Law Today. Retrieved from https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/ 

Telang, A. (2023). The Promise and Peril of AI Legal Services to Equalize Justice. Harvard Journal of Law & Technology. Retrieved from https://jolt.law.harvard.edu/digest/the-promise-and-peril-of-ai-legal-services-to-equalize-justice 

Tripp, A., Chavan, A., & Pyle, J. (2018). Case Studies for Legal Services Community Principles and Guidelines for Due Process and Ethics in the Age of AI. Retrieved from https://docs.google.com/document/d/1rEvg5xuOs_o1njPHHpF9jtuaGi0ren6DYUElBu0Fkfk/edit 

Verma, P., & Oremus, W. (2023, November). These lawyers used ChatGPT to save time. They got fired and fined. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2023/11/16/chatgpt-lawyer-fired-ai/

Westermann, H., & Benyekhlef, K. (2023). JusticeBot: A Methodology for Building Augmented Intelligence Tools for Laypeople to Increase Access to Justice. ICAIL 2023. https://arxiv.org/pdf/2308.02032.pdf , https://www.cyberjustice.ca/en/logiciels-cyberjustice/nos-solutions-logicielles/justicebot/

Wilkins, S. (2023, February). DoNotPay’s Downfall Put a Harsh Spotlight on AI and Justice Tech. Now What? Legaltech News. Retrieved from https://www.law.com/legaltechnews/2023/02/10/donotpays-downfall-put-a-harsh-spotlight-on-ai-and-justice-tech-now-what/ 

Evaluating AI’s Performance & Harms, beyond law

Agrawal, A., Suzgun, M., Mackey, L., & Kalai, A. T. (2023). Do Language Models Know When They’re Hallucinating References?, 1–18. Retrieved from http://arxiv.org/abs/2305.18248 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT 2021 – Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922 

Bickmore, T. W., Trinh, H., Olafsson, S., O’Leary, T. K., Asadi, R., Rickles, N. M., & Cruz, R. (2018). Patient and consumer safety risks when using conversational assistants for medical information: An observational study of siri, alexa, and google assistant. Journal of Medical Internet Research, 20(9). https://doi.org/10.2196/11510 

Bommasani, R., Liang, P., & Lee, T. (2023). Holistic Evaluation of Language Models. Stanford, CA. https://doi.org/10.1111/nyas.15007 

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., … Liang, P. (2021). On the Opportunities and Risks of Foundation Models, 1–214. Retrieved from http://arxiv.org/abs/2108.07258 

Jones, E., & Steinhardt, J. (2022). Capturing Failures of Large Language Models via Human Cognitive Biases. Advances in Neural Information Processing Systems, 35(NeurIPS), 1–22.

Shuster, K., Poff, S., Chen, M., Kiela, D., & Weston, J. (2021). Retrieval Augmentation Reduces Hallucination in Conversation. Findings of the Association for Computational Linguistics: EMNLP 2021, 3784–3803. https://doi.org/10.18653/v1/2021.findings-emnlp.320

Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain, D., Perez, E., … Kaplan, J. (2022). Language Models (Mostly) Know What They Know. Retrieved from http://arxiv.org/abs/2207.05221 

Mündler, N., He, J., Jenko, S., & Vechev, M. (2023). Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation, 1–26. Retrieved from http://arxiv.org/abs/2305.15852 

Nakao, Y., Strappelli, L., Stumpf, S., Naseer, A., Regoli, D., & Del Gamba, G. (2023). Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness. International Journal of Human-Computer Interaction, 39(9), 1762–1788. https://doi.org/10.1080/10447318.2022.2067936

Peng, B., Galley, M., He, P., Cheng, H., Xie, Y., Hu, Y., … Gao, J. (2023). Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback. Retrieved from http://arxiv.org/abs/2302.12813 

Tian, K., Mitchell, E., Yao, H., Manning, C. D., & Finn, C. (2023). Fine-tuning Language Models for Factuality, 1–16. Retrieved from http://arxiv.org/abs/2311.08401 

Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P. Sen, Mellor, J., … Gabriel, I. (2022). Taxonomy of Risks posed by Language Models. In ACM International Conference Proceeding Series (Vol. 22, pp. 214–229). ACM. https://doi.org/10.1145/3531146.3533088

Strategies to Improve Safe Use of AI

Argo, J. J., & Main, K. J. (2004). Meta-analyses of the effectiveness of warning labels. Journal of Public Policy and Marketing, 23(2), 193–208. https://doi.org/10.1509/jppm.23.2.193.51400

Ayres, I., & Schwartz, A. (2014). The no-reading problem in consumer contract law. Stanford Law Review, 66(3), 545–610. https://www.stanfordlawreview.org/wp-content/uploads/sites/3/2014/03/66_Stan_L_Rev_545_AyresSchwartz.pdf

Ben-Shahar, O., & Chilton, A. (2016). Simplification of privacy disclosures: An experimental test. Journal of Legal Studies, 45(S2), S41–S67. https://doi.org/10.1086/688405

Calo, M. R. (2013). Against Notice Skepticism in Privacy (and Elsewhere). Notre Dame L. Rev, 87(1027). Retrieved from http://scholarship.law.nd.edu/ndlr%5Cnhttp://scholarship.law.nd.edu/ndlr/vol87/iss3/3

Kelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009). A “nutrition label” for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security – SOUPS ’09 (p. 1). https://doi.org/10.1145/1572532.1572538

Hagan, M. (2016). Designing 21st-Century Disclosures for Financial Decision Making. Stanford, CA. Retrieved from https://law.stanford.edu/publications/designing-21st-century-disclosures-for-financial-decision-making/

Martel, C., & Rand, D. G. (2023). Misinformation warning labels are widely effective: A review of warning effects and their moderating features. Current Opinion in Psychology, 54, 101710. https://doi.org/10.1016/j.copsyc.2023.101710

Robinson, L. A., Viscusi, W. K., & Zeckhauser, R. (2016, November). Consumer Warning Labels Aren’t Working. Harvard Business Review. Retrieved from https://hbr.org/2016/11/consumer-warning-labels-arent-working

Schaub, F., Balebako, R., Durity, A. L., & Cranor, L. F. (2018). A Design Space for Effective Privacy Notices. In The Cambridge Handbook of Consumer Privacy (pp. 365–393). https://doi.org/10.1017/9781316831960.021