
Open Letter (MIT): A Safe Harbor for Independent AI Evaluation

We propose that AI companies make simple policy changes to protect good faith research on their models and to promote the safety, security, and trustworthiness of AI systems. We, the undersigned, represent members of the AI, legal, and policy communities with diverse expertise and interests. We agree on three things:

    1. Independent evaluation is necessary for public awareness, transparency, and accountability of high-impact generative AI systems. Hundreds of millions of people have used generative AI in the last two years. It promises immense benefits, but also serious risks related to bias, alleged copyright infringement, and non-consensual intimate imagery. AI companies, academic researchers, and civil society agree that generative AI systems pose notable risks and that independent evaluation of these risks is an essential form of accountability.
    2. Currently, AI companies’ policies can chill independent evaluation. While companies’ terms of service deter malicious use, they offer no exemption for independent good faith research, leaving researchers at risk of account suspension or even legal reprisal. Whereas security research on traditional software has established voluntary protections from companies (“safe harbors”), clear norms from vulnerability disclosure policies, and legal protections from the DOJ, trustworthiness and safety research on AI systems has few such protections. Independent evaluators fear account suspension (without an opportunity for appeal) and legal risks, both of which can have chilling effects on research. While some AI companies now offer researcher access programs, which we applaud, the structure of these programs allows companies to select their own evaluators. This is a complement to, rather than a substitute for, the full range of diverse evaluations that might otherwise take place independently.
    3. AI companies should provide basic protections and more equitable access for good faith AI safety and trustworthiness research. Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned the kinds of research aimed at holding them accountable, using threats of legal action, cease-and-desist letters, and other methods that chill research. In some cases, generative AI companies have already suspended researcher accounts and even changed their terms of service to deter some types of evaluation (discussed here). Disempowering independent researchers is not in AI companies’ own interests. To help protect users, we encourage AI companies to provide two levels of protection for research:
        1. First, a legal safe harbor would indemnify good faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.
        2. Second, companies should commit to more equitable access by using independent reviewers to moderate researchers’ evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions and mitigate the concern of companies selecting their own evaluators.

While these basic commitments will not solve every issue surrounding responsible AI today, they are an important first step on the long road toward building and evaluating AI in the public interest.

Additional reading on these ideas: a safe harbor for AI evaluation (by the letter’s authors), algorithmic bug bounties, and credible third-party audits. (Signatures endorse this letter, not the further reading.)
