Categories
AI + Access to Justice Current Projects

Schedule for the AI & A2J JURIX workshop

Our organizing committee was pleased to receive many excellent submissions for the AI & A2J Workshop at JURIX on December 18, 2023. We accepted half of the submissions, and we extended the half-day workshop to a full day to accommodate them.

We are pleased to announce our final schedule for the workshop:

Schedule for the AI & A2J Workshop

Morning Sessions

Welcome Kickoff, 09:00-09:15

Conference organizers welcome everyone, lead introductions, and review the day’s plan.

1: AI-A2J in Practice, 09:15-10:30

09:15-09:30: Juan David Gutierrez: AI technologies in the judiciary: Critical appraisal of LLMs in judicial decision making

09:30-09:45: Ransom Wydner, Sateesh Nori, Eliza Hong, Sam Flynn, and Ali Cook: AI in Access to Justice: Coalition-Building as Key to Practical and Sustainable Applications

09:45-10:00: Mariana Raquel Mendoza Benza: Insufficient transparency in the use of AI in the judiciary of Peru and Colombia: A challenge to digital transformation

10:00-10:15: Vanja Skoric, Giovanni Sileno, and Sennay Ghebreab: Leveraging public procurement for LLMs in the public sector: Enhancing access to justice responsibly

10:15-10:30: Soumya Kandukuri: Building the AI Flywheel in the American Judiciary

Break: 10:30-11:00 

2: AI for A2J Advice, Issue-Spotting, and Engagement Tasks, 11:00-12:30 

11:00: Opening remarks to the session

11:05-11:20: Sam Harden: Rating the Responses to Legal Questions by Generative AI Models

11:20-11:35: Margaret Hagan: Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for responses to people’s legal problem stories

11:35-11:50: Nick Goodson and Rongfei Lui: Intention and Context Elicitation with Large Language Models in the Legal Aid Intake Process

11:50-12:05: Nina Toivonen, Marika Salo-Lahti, Mikko Ranta, and Helena Haapio: Beyond Debt: The Intersection of Justice, Financial Wellbeing and AI

12:05-12:15: Amit Haim: Large Language Models and Legal Advice

12:15-12:30: General Discussions, Takeaways, and Next Steps on AI for Advice

Break: 12:30-13:30

Afternoon Sessions

3: AI for Forms, Contracts & Dispute Resolution, 13:30-15:00

13:30: Opening remarks to this session

13:35-13:50: Quinten Steenhuis, David Colarusso, and Bryce Wiley: Weaving Pathways for Justice with GPT: LLM-driven automated drafting of interactive legal applications

13:50-14:05: Katie Atkinson, David Bareham, Trevor Bench-Capon, Jon Collenette, and Jack Mumford: Tackling the Backlog: Support for Completing and Validating Forms

14:05-14:20: Anne Ketola, Helena Haapio, and Robert de Rooy: Chattable Contracts: AI Driven Access to Justice

14:20-14:30: Nishat Hyder-Rahman and Marco Giacalone: The role of generative AI in increasing access to justice in family (patrimonial) law

14:30-15:00: General Discussions, Takeaways, and Next Steps on AI for Forms & Dispute Resolution

Break: 15:00-15:30

4: AI-A2J Technical Developments, 15:30-16:30

15:30: Welcome to the session

15:35-15:50: Marco Billi, Alessandro Parenti, Giuseppe Pisano, and Marco Sanchi: A hybrid approach of accessible legal reasoning through large language models

15:50-16:05: Bartosz Krupa: Polish BERT legal language model

16:05-16:20: Jakub Drápal: Understanding Criminal Courts

16:20-16:30: General Discussion on Technical Developments in AI & A2J

Closing Discussion: 16:30-17:00

What are the connections between the sessions? What next steps do participants think will be useful? What new research questions and efforts might emerge from today?


Report a problem you’ve found with AI & legal help

The Legal Design Lab is compiling a database of “AI & Legal Help problem incidents”. Please contribute by entering information into this form, which feeds into the database.

We will be making this database available in the near future, as we collect and review more records.

For this database, we’re looking for specific examples of where AI platforms (like ChatGPT, Bard, Bing Chat, etc.) provide problematic responses, like:

  • incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
  • hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
  • irrelevant, distracting, or off-topic information;
  • misrepresentation of the law;
  • overly simplified information that loses key nuance or cautions;
  • other responses that might be harmful to a person trying to get legal help.

You can report any incidents you’ve experienced via this form: https://airtable.com/apprz5bA7ObnwXEAd/shrQoNPeC7iVMxphp


Fill in the form to report an AI-Justice problem incident


Call for papers to the JURIX workshop on AI & Access to Justice

At the December 2023 JURIX conference on Legal Knowledge and Information Systems, there is an academic workshop on AI and Access to Justice.

There is an open call for submissions to the workshop; the deadline has been extended to November 20, 2023. We encourage academics, practitioners, and others interested in the field to submit a paper or to consider attending.

The workshop will be on December 18, 2023 in Maastricht, Netherlands (with possible hybrid participation available).

See more about the conference at the main JURIX 23 website.

About the AI & A2J workshop

This workshop will bring together lawyers, computer scientists, and social science researchers to discuss their findings and proposals around how AI might be used to improve access to justice, as well as how to hold AI models accountable for the public good.

Why this workshop? As more of the public learns about AI, there is the potential that more people will use AI tools to understand their legal problems, seek assistance, and navigate the justice system. There is also more interest (and suspicion) by justice professionals about how large language models might affect services, efficiency, and outreach around legal help. The workshop will be an opportunity for an interdisciplinary group of researchers to shape a research agenda, establish partnerships, and share early findings about what opportunities and risks exist in the AI/Access to Justice domain — and how new efforts and research might contribute to improving the justice system through technology.

What is Access to Justice? Access to justice (A2J) goals center around making the civil justice system more equitable, accessible, empowering, and responsive for people who are struggling with issues around housing, family, workplace, money, and personal security. Specific A2J goals may include increasing people’s legal capability and understanding; their ability to navigate formal and informal justice processes; their ability to do legal tasks around paperwork, prediction, decision-making, and argumentation; and justice professionals’ ability to understand and reform the system to be more equitable, accessible, and responsive. How might AI contribute to these goals? And what are the risks when AI is more involved in the civil justice system?

At the JURIX AI & Access to Justice Workshop, we will explore new ideas, research efforts, frameworks, and proposals on these topics. By the end of the workshop, participants will be able to:

  • Identify the key challenges and opportunities for using AI to improve access to justice.
  • Identify the key challenges and opportunities of building new data sets, benchmarks, and research infrastructure for AI for access to justice.
  • Discuss the ethical and legal implications of using AI in the legal system, particularly for tasks related to people who cannot afford full legal representation.
  • Develop proposals for how to hold AI models accountable for the public good.

Format of the Workshop: The workshop will be conducted in a hybrid form and will consist of a mix of presentations, panel discussions, and breakout sessions. It will be a half-day session. Participants will have the opportunity to share their own work and learn from the expertise of others.

Organizers of the Workshop: Margaret Hagan (Stanford Legal Design Lab), Nora al-Haider (Stanford Legal Design Lab), Hannes Westermann (University of Montreal), Jaromir Savelka (Carnegie Mellon University), Quinten Steenhuis (Suffolk LIT Lab).

Are you generally interested in AI & Access to Justice? Sign up for our Stanford Legal Design Lab AI-A2J interest list to stay in touch.

Submit a paper to the AI & A2J Workshop

We welcome submissions of 4-12 pages (using the IOS formatting guidelines). A selection will be made on the basis of workshop-level reviewing focusing on overall quality, relevance, and diversity.

Workshop submissions may be about the topics described above, including:

  • findings of research about how AI is affecting access to justice,
  • evaluation of AI models and tools intended to benefit access to justice,
  • outcomes of new interventions intended to deploy AI for access to justice,
  • proposals of future work to use AI or hold AI initiatives accountable,
  • principles & frameworks to guide work in this area, or
  • other topics related to AI & access to justice

Deadline extended to November 20, 2023

Submission Link: Submit your 4-12 page paper here: https://easychair.org/my/conference?conf=jurixaia2j

Notification: November 28, 2023

Workshop: December 18, 2023 (with the possibility of hybrid participation) in Maastricht, Netherlands

More about the JURIX Conference

The Foundation for Legal Knowledge Based Systems (JURIX) is an organization of researchers in the field of Law and Computer Science in the Netherlands and Flanders. Since 1988, JURIX has held annual international conferences on Legal Knowledge and Information Systems.

This year’s JURIX conference on Legal Knowledge and Information Systems will be hosted in Maastricht, the Netherlands, on December 18-20, 2023.

The proceedings of the conference will be published in the Frontiers in Artificial Intelligence and Applications series of IOS Press. JURIX follows the gold standard and provides one of the best dissemination platforms in AI & law.



AI Platforms & Privacy Protection through Legal Design

How can regulators, researchers, and tech companies proactively protect people’s rights & privacy, even as AI becomes more ubiquitous so quickly?

by Margaret Hagan, originally published at Legal Design & Innovation

This past week, I had the privilege of attending the State of Privacy event in Rome, with policy, technical, and research leaders from Italy and Europe.

I was at a table focused on the intersection of Legal Design, AI platforms, and privacy protections.

Our multidisciplinary group spent several hours getting concrete: what are the scenarios and user needs around privacy & AI platforms? What are the main concerns and design challenges?

We then moved towards an initial brainstorm. What ideas for interventions, infrastructure, or processes could help move AI platforms towards greater privacy protections — and avoid privacy problems that have arisen in similar technology platform advancements in the recent past? What could we learn from privacy challenges, solutions, and failures that came with the rise of websites on the open Internet, the advancement of search engines, and the popular use of social media platforms?

Our group circled around some promising, exciting ideas for cross-Atlantic collaboration. Here is a short recap of them.

Learning from the User Burdens of Privacy Pop-ups & Cookie Banners

Can we avoid putting so many burdens on the user, like the cookie banners and privacy pop-ups on every website? We can learn from the current crop of privacy protections, which warn European visitors when they open any new website and require them to read, choose, and click through pop-up menus about cookies, privacy, and more. How can we reduce these user burdens and avoid privacy burn-out?

Smart AI privacy warnings, woven into interactions

Can the AI be smart enough to respond with warnings when people are crossing into a high-risk area? Perhaps instead of generalized warnings about privacy implications — a conversational AI agent can let a person know when they are sharing data/asking for information that has a higher risk of harm. This might be when a person asks a question about their health, finances, personal security, divorce/custody, domestic violence, or another topic that could have damaging consequences to them if others (their family members, financial institutions, law enforcement, insurance companies, or other third parties) found out. The AI could be programmed to be privacy-protective, to easily let a person choose at the moment about whether to take the risk of sharing this sensitive data, to help a person understand the risks in this specific domain, and to help the person delete or manage their privacy for this particular interaction.
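The detection step of this idea can be sketched in a few lines. The topic list, keywords, and warning text below are illustrative assumptions, not any platform’s actual policy; a real system would use a classifier rather than keyword matching.

```python
# Minimal sketch: flag a user's message when it touches a high-risk topic,
# and build an in-conversation privacy warning before the AI responds.
# Topics, keywords, and warning wording are hypothetical.

SENSITIVE_TOPICS = {
    "health": ["diagnosis", "medication", "therapy"],
    "finances": ["debt", "bankruptcy", "salary"],
    "family law": ["divorce", "custody", "domestic violence"],
}

def detect_sensitive_topics(message: str) -> list[str]:
    """Return the high-risk topics a message appears to touch."""
    text = message.lower()
    return [
        topic
        for topic, keywords in SENSITIVE_TOPICS.items()
        if any(kw in text for kw in keywords)
    ]

def privacy_warning(message: str):
    """Build an in-conversation warning, or None if no risk was detected."""
    topics = detect_sensitive_topics(message)
    if not topics:
        return None
    return (
        "You are sharing information about: "
        + ", ".join(topics)
        + ". You can continue, rephrase, or delete this message from your history."
    )
```

The warning only fires on detected topics, so routine questions pass through with no extra friction, which is the point of weaving warnings into the interaction rather than front-loading them.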

Choosing the Right Moment for Privacy Warnings & Choices

Can warnings and choices around privacy come during the ‘right moment’? Perhaps it’s not best to warn people before they sign up for a service, or even right when they are logging on. This is typically when people are most hungry for AI interaction & information. They don’t want to be distracted. Rather, can the warning, choices, and settings come during the interactions — or after it? A user is likely to have ‘buyer’s remorse’ with AI platforms: did I overshare? Who can see what I just shared? Could someone find out what I talked about with the AI? How can privacy terms & controls be easily accessible right when people need it, usually during these “clean up” moments?

Conducting More Varied User Research about AI & Protections

We need more user research in different cultures and demographics about how people use AI, relate to it, and critique it (or do not). To figure out how to develop privacy protections, warning/disclosure designs, and other techno-policy solutions, first we need a deeper understanding of various AI users, their needs and preferences, and their willingness to engage with different kinds of protections.

Building an International Network Working on AI & Privacy Protections

Could we have anchor universities, with strong computer science, policy, and law departments, that host workshops and training on the ethical development of AI platforms? These could help bring future technology leaders into cross-disciplinary contact with people from policy and law, to learn about social good matters like privacy. These cross-disciplinary groups could also help policy & law experts learn how to integrate their principles and research into more technical form, like by developing labeled datasets and model benchmarks.

Are you interested in ensuring there is privacy built into AI platforms? Are you working on user, technical, or policy research on what the privacy scenarios, needs, risks, and solutions might be on AI platforms? Please be in touch!

Thank you to Dr. Monica Palmirani for leading the Legal Design group at the State of Privacy event, at the lovely Museo Nazionale Etrusco di Villa Giulia in Rome.

Categories
Work Product Tool

DocuBot for filling in forms through SMS


DocuBot is a tool to fill in legal documents and other forms through an SMS or other chatbot-like experience. The bot asks questions to fill in the form.

Here is more information from its creator, 1Law.

1LAW is proud to announce the creation of Docubot™, a legal document generating artificial intelligence. In conjunction with some of the best lawyers in the United States, Docubot draws on form databases of thousands of legal documents. Docubot will assist individuals with legal queries as well as generate documents for them. To help serve Legal Aid, Docubot will allow users to interact via SMS text.
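The core pattern described above — ask one question per blank field, then merge the answers into a document — can be sketched as follows. The field names, prompts, and template are hypothetical; Docubot’s actual form databases and Watson-backed dialogue are far richer.

```python
# Minimal sketch of a form-filling chatbot: one question per field,
# answers merged into a document template. All names here are illustrative.

FIELDS = [
    ("tenant_name", "What is your full name?"),
    ("landlord_name", "What is your landlord's name?"),
    ("move_out_date", "What date will you move out?"),
]

TEMPLATE = (
    "NOTICE TO VACATE\n"
    "Tenant: {tenant_name}\n"
    "Landlord: {landlord_name}\n"
    "Move-out date: {move_out_date}\n"
)

def fill_form(answer_fn) -> str:
    """Ask each question in turn (via answer_fn) and render the document."""
    answers = {}
    for field, question in FIELDS:
        answers[field] = answer_fn(question).strip()
    return TEMPLATE.format(**answers)
```

Over SMS, `answer_fn` would send the question as a text message and block until the user’s reply arrives; in a web chat, it would post the question into the thread.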

Tech specs:

Written in Go on the server
Powered by the Watson REST API
Swift on the iOS side
Back-and-forth communication handled via the WebSocket protocol

Output: everything is encrypted

The document is generated using a headless WebKit browser that takes an HTML document and outputs a PDF, which is stored in a private S3 bucket. A short-lived URL is then generated and sent to the user; each time the user loads the thread, they are given a new URL. The document is backed up on the S3 server.
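The short-lived-URL step above can be sketched with an HMAC-signed expiring link. In practice this role is typically played by an S3 presigned URL; the secret, host, and 15-minute lifetime here are illustrative assumptions, not 1LAW’s implementation.

```python
# Minimal sketch of an expiring document link: the signature covers the
# document key plus an expiry timestamp, so the URL self-invalidates.
# Secret, host, and lifetime are hypothetical.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # held server-side, never sent to the client

def make_short_lived_url(doc_key: str, lifetime_s: int = 900, now=None) -> str:
    """Return a URL valid for lifetime_s seconds from 'now'."""
    expires = int(now if now is not None else time.time()) + lifetime_s
    payload = f"{doc_key}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://docs.example.com/{doc_key}?expires={expires}&sig={sig}"

def verify_url(doc_key: str, expires: int, sig: str, now=None) -> bool:
    """Accept only unexpired links whose signature matches."""
    if int(now if now is not None else time.time()) >= expires:
        return False
    payload = f"{doc_key}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the signature binds the key and expiry together, a user can be handed a fresh URL on every thread load while old links quietly stop working.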

Contact us at: info@www.1law.com for more information.