
3 Shifts for AI in the Justice System: LSC 50th Anniversary presentation

In mid-April, Margaret Hagan presented the Lab's research and development efforts around AI and access to justice at the Legal Services Corporation's 50th anniversary forum. This large gathering of legal aid executive directors, national justice leaders, members of Congress, philanthropists, and corporate leaders celebrated the work of LSC and profiled future directions of legal services.

Margaret was on a panel along with legal aid leader Sateesh Nori, Suffolk Law School Dean Andy Perlman, and former LSC president James Sandman.

She presented 3 big takeaways for the audience about how to decide if and how AI should be used to close the justice gap, especially how to move beyond gut reactions and anecdotes that tend toward too much optimism or skepticism. Based on the Lab's research and design activities, she proposed 3 big shifts for civil justice leaders in how they approach generative AI.

Shift 1: Towards Techno-Realism

The first shift moves away from hardline camps of blanket optimism or pessimism about AI's potential futures and toward more empirical, detailed work. Where are the specific tasks at which AI can be helpful? Can we demonstrate with lab studies and controlled pilots whether AI can perform better than humans at these specific tasks, with equal or higher quality and efficiency? This move toward applied research can lead to more responsible innovation, rather than rushing into AI applications too quickly or chilling the innovation space preemptively.

Shift 2: From Reactive to Proactive Leadership

The second shift concerns how lawyers and justice professionals approach the world of AI. Will they react to what technologists release to the public, trying to assemble the right mix of norms, lawsuits, and regulations to push AI toward being safe enough, and of high enough quality, for legal use cases?

Instead, they can be proactive. They can run R&D cohorts to see what AI is good at and what risks and harms emerge in test applications, and then work with AI companies and regulators to encourage AI's strengths and mitigate its risks. This means joining with technologists (especially those at universities and benefit corporations) to do hands-on, exploratory demonstration projects that better inform investments, regulation, and other policy-making on AI for justice use cases.

Shift 3: From Local Pilots to a Coordinated Network

The final shift is about how innovators work. Legal aid groups or court staff could launch AI pilots on their own, building a new application or bot for their local jurisdiction and then sharing it at upcoming conferences to let others know about it. Or, from the beginning, they could craft their technical systems, UX designs, vendor relationships, data management, and safety evaluations in concert with others around the country who are working on similar efforts. Even if the ultimate application is run and managed locally, much of the infrastructure can be shared in national cohorts. These national cohorts can also help gather data, experiences, risk and harm incidents, and other important information to guide the task forces, attorneys general, tech companies, and others setting the policies for legal help AI in the future.

See more of the presentation in the slides below.
