
U.S. ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE by NIST

An evaluation copy of the Artificial Intelligence Safety Institute Consortium Cooperative Research and Development Agreement (CRADA) is now available (read here). NIST will continue to accept Letters of Interest (LOI) from organizations interested in participating in the U.S. AI Safety Institute Consortium until January 15, 2024. Letters received after the deadline may not be considered. A list of the Consortium’s initial working groups is also available below.

In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST is establishing the U.S. Artificial Intelligence Safety Institute (USAISI) and a related Consortium (“Consortium”). The Consortium will help equip and empower the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development and responsible use of safe and trustworthy AI.

NIST is inviting organizations to provide letters of interest describing technical expertise, products, data, and/or models to support and demonstrate pathways to enable safe and trustworthy AI systems. Interested organizations should complete a Letter of Interest (LOI) by January 15, 2024.

NIST hosted a workshop on November 17, 2023, to engage in a conversation about artificial intelligence (AI) safety. The hybrid workshop attracted 145 in-person and over 780 online participants. The recording, agenda, and slides are available.

Following the workshop, NIST has compiled an initial list of working groups in which organizations can participate:

  • Working Group #1: Risk Management for Generative AI
  • Working Group #2: Synthetic Content
  • Working Group #3: Capability Evaluations
  • Working Group #4: Red-Teaming
  • Working Group #5: Safety & Security

We anticipate that societal and technological considerations will be incorporated throughout the working group activities to ensure that safe and trustworthy AI systems can be developed effectively.

Artificial Intelligence Safety Institute Consortium

Overview

Building upon its long track record of working with the private and public sectors and its history of reliable and practical measurement and standards-oriented solutions, NIST seeks research collaborators who can support this vital undertaking. Specifically, NIST looks to:

  • Create a convening space for collaborators to have an informed dialogue and enable sharing of information and knowledge
  • Engage in collaborative research and development through shared projects
  • Enable assessment and evaluation of test systems and prototypes to inform future AI measurement efforts

To create a lasting approach for continued joint research and development, NIST will engage stakeholders via this consortium. The work of the consortium will be open and transparent and provide a hub for interested parties to work together in building and maturing a measurement science for trustworthy and responsible AI.

Consortium members will be expected to contribute technical expertise in one or more of the following areas:

  • Data and data documentation
  • AI Metrology
  • AI Governance
  • AI Safety
  • Trustworthy AI
  • Responsible AI
  • AI system design and development
  • AI system deployment
  • AI Red Teaming
  • Human-AI Teaming and Interaction
  • Test, Evaluation, Validation and Verification methodologies
  • Socio-technical methodologies
  • AI Fairness
  • AI Explainability and Interpretability
  • Workforce skills
  • Psychometrics
  • Economic analysis
  • Models, data and/or products to support and demonstrate pathways to enable safe and trustworthy artificial intelligence (AI) systems through the NIST AI Risk Management Framework
  • Infrastructure support for consortium projects
  • Facility space and hosting consortium researchers, webinars, workshops and conferences, and online meetings

Consortium members’ contributions should support one of the following areas:

  1. Develop new guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing or deploying AI in safe, secure, and trustworthy ways
  2. Develop guidance and benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm
  3. Develop approaches for incorporating secure-development practices for generative AI, with special considerations for dual-use foundation models, including
    • Guidance related to assessing and managing the safety, security, and trustworthiness of models and related to privacy-preserving machine learning;
    • Guidance to ensure the availability of testing environments
  4. Develop and ensure the availability of testing environments
  5. Develop guidance, methods, skills and practices for successful red-teaming and privacy-preserving machine learning
  6. Develop guidance and tools for authenticating digital content
  7. Develop guidance and criteria for AI workforce skills, including risk identification and management, test, evaluation, validation, and verification (TEVV), and domain-specific expertise
  8. Explore the complexities at the intersection of society and technology, including the science of how humans make sense of and engage with AI in different contexts
  9. Develop guidance for understanding and managing the interdependencies between and among AI actors along the lifecycle

How to Get Involved 

Interested organizations with relevant technical capabilities should complete a Letter of Interest (LOI) by January 15, 2024. 

