For this database, we’re looking for specific examples of AI platforms (like ChatGPT, Bard, Bing Chat, etc.) providing problematic responses, such as:
- incorrect information about legal rights, rules, jurisdiction, forms, or organizations;
- hallucinations of cases, statutes, organizations, hotlines, or other important legal information;
- irrelevant, distracting, or off-topic information;
- misrepresentation of the law;
- overly simplified information that loses key nuance or cautions;
- anything else that might be harmful to a person trying to get legal help.
You can send in any incidents you’ve experienced through this form: https://airtable.com/apprz5bA7ObnwXEAd/shrQoNPeC7iVMxphp
We will review submissions and make the incident database available in the future for those interested.