
AI Platforms & Privacy Protection through Legal Design

How can regulators, researchers, and tech companies proactively protect people's rights & privacy, even as AI so rapidly becomes ubiquitous?

by Margaret Hagan, originally published at Legal Design & Innovation

This past week, I had the privilege of attending the State of Privacy event in Rome, with policy, technical, and research leaders from Italy and Europe.

I was at a table focused on the intersection of Legal Design, AI platforms, and privacy protections.

Our multidisciplinary group spent several hours getting concrete: what are the scenarios and user needs around privacy & AI platforms? What are the main concerns and design challenges?

We then moved towards an initial brainstorm. What ideas for interventions, infrastructure, or processes could help move AI platforms towards greater privacy protections — and avoid privacy problems that have arisen in similar technology platform advancements in the recent past? What could we learn from privacy challenges, solutions, and failures that came with the rise of websites on the open Internet, the advancement of search engines, and the popular use of social media platforms?

Our group converged on some promising, exciting ideas for transatlantic collaboration. Here is a short recap of them.

Learning from the User Burdens of Privacy Pop-ups & Cookie Banners

Can we avoid putting so many burdens on the user, as cookie banners and privacy pop-ups on every website do? We can learn from the current crop of privacy protections, which warn European visitors whenever they open a new website and require them to read, choose, and click through pop-up menus about cookies, privacy, and more. How can we lower these user burdens and move away from interfaces that produce privacy burn-out?

Smart AI Privacy Warnings, Woven into Interactions

Can the AI be smart enough to respond with warnings when people are crossing into a high-risk area? Perhaps, instead of generalized warnings about privacy implications, a conversational AI agent could let a person know when they are sharing data or asking for information that carries a higher risk of harm. This might be when a person asks a question about their health, finances, personal security, divorce/custody, domestic violence, or another topic that could have damaging consequences for them if others (family members, financial institutions, law enforcement, insurance companies, or other third parties) found out. The AI could be programmed to be privacy-protective: to let a person easily choose in the moment whether to take the risk of sharing this sensitive data, to help the person understand the risks in this specific domain, and to help them delete or manage their privacy for this particular interaction.
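To make this idea concrete, here is a minimal sketch of what such an in-conversation gate might look like. Everything in it is illustrative: the topic list, the classify_sensitivity helper, and the warning flow are hypothetical stand-ins, and a real system would use a trained classifier rather than keyword matching.

```python
# Hypothetical sketch of an in-conversation privacy gate. The topic list and
# all function names are illustrative, not any real platform's API.

SENSITIVE_TOPICS = {
    "health": ["diagnosis", "medication", "therapy"],
    "finances": ["debt", "bankruptcy", "salary"],
    "family law": ["divorce", "custody", "domestic violence"],
}

def classify_sensitivity(message: str) -> str | None:
    """Return the high-risk topic a message touches, if any.

    Keyword matching is a stand-in for a trained classifier; it will
    miss paraphrases and produce false positives.
    """
    lowered = message.lower()
    for topic, keywords in SENSITIVE_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            return topic
    return None

def privacy_gate(message: str, send_to_model) -> str:
    """Warn before forwarding a high-risk message, and let the user choose."""
    topic = classify_sensitivity(message)
    if topic is not None:
        print(
            f"Heads up: this looks like a question about {topic}. "
            "Sharing details here could be harmful if third parties "
            "ever saw this conversation. You can rephrase, continue, "
            "or delete this exchange afterwards."
        )
        choice = input("Send it anyway? [y/N] ")
        if choice.strip().lower() != "y":
            return "Message withheld; nothing was shared."
    return send_to_model(message)
```

The design point is that the warning is specific to the detected domain and arrives at the moment of sharing, rather than as a blanket disclosure at sign-up.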

Choosing the Right Moment for Privacy Warnings & Choices

Can warnings and choices around privacy come at the 'right moment'? Perhaps it is not best to warn people before they sign up for a service, or even right when they are logging on. That is typically when people are most hungry for AI interaction & information, and they don't want to be distracted. Rather, can the warnings, choices, and settings come during the interaction, or after it? A user is likely to have 'buyer's remorse' with AI platforms: did I overshare? Who can see what I just shared? Could someone find out what I talked about with the AI? How can privacy terms & controls be easily accessible right when people need them, usually during these 'clean up' moments?
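As a rough sketch of what those 'clean up' moments could look like in practice, the snippet below logs flagged exchanges during a session and only surfaces review-and-delete controls once the conversation ends. The SessionLog class and its methods are hypothetical names, not any real platform's API.

```python
# Hypothetical sketch of deferred, end-of-session privacy controls.
from dataclasses import dataclass, field

@dataclass
class SessionLog:
    # (topic, message) pairs flagged by a gate like the one sketched above
    flagged: list[tuple[str, str]] = field(default_factory=list)

    def record(self, topic: str, message: str) -> None:
        self.flagged.append((topic, message))

    def cleanup_review(self) -> None:
        """Run once the conversation ends, at the 'buyer's remorse' moment."""
        if not self.flagged:
            return
        print(f"This session touched {len(self.flagged)} sensitive exchange(s):")
        for i, (topic, message) in enumerate(self.flagged, start=1):
            print(f"  {i}. [{topic}] {message[:60]}")
        choice = input("Delete these from your history? [y/N] ")
        if choice.strip().lower() == "y":
            self.flagged.clear()
            print("Deleted; these exchanges will not be retained.")
```

The timing is the point: the user is never interrupted mid-task, but the controls are waiting exactly where the "did I overshare?" question tends to arise.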

Conducting More Varied User Research about AI & Protections

We need more user research in different cultures and demographics about how people use AI, relate to it, and critique it (or do not). To figure out how to develop privacy protections, warning/disclosure designs, and other techno-policy solutions, first we need a deeper understanding of various AI users, their needs and preferences, and their willingness to engage with different kinds of protections.

Building an International Network Working on AI & Privacy Protections

Could we have anchor universities, with strong computer science, policy, and law departments, that host workshops and training on the ethical development of AI platforms? These could bring future technology leaders into cross-disciplinary contact with people from policy and law, to learn about social-good matters like privacy. These cross-disciplinary groups could also help policy & law experts learn how to translate their principles and research into more technical forms, like labeled datasets and model benchmarks.

Are you interested in ensuring there is privacy built into AI platforms? Are you working on user, technical, or policy research on what the privacy scenarios, needs, risks, and solutions might be on AI platforms? Please be in touch!

Thank you to Dr. Monica Palmirani for leading the Legal Design group at the State of Privacy event, at the lovely Museo Nazionale Etrusco di Villa Giulia in Rome.
