Guidance from 23 government agencies, in cooperation with 19 AI expert organisations.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

GOV.UK guidance: Guidelines for secure AI system development.

Guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others.

Executive summary (Page 1 of 9)

This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

This document is aimed primarily at providers of AI systems, whether they are using models hosted by an organisation or external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.


About the guidelines

AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way.

AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.

For this reason, the guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. For each section, we suggest considerations and mitigations that will help reduce the overall risk to an organisational AI system development process.

Secure design

This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.

Secure development

This section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.
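One concrete supply chain control in this area is artifact integrity checking: verifying that a model or dataset obtained from a third party matches a known-good digest before it is used. The sketch below is illustrative, not part of the guidance; the file name `model.bin` and the pinned digest are hypothetical.

```python
import hashlib
from pathlib import Path


def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large model files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


# Example: a stand-in artifact checked against its pinned digest.
artifact = Path("model.bin")
artifact.write_bytes(b"example model weights")
pinned = hashlib.sha256(b"example model weights").hexdigest()
print(verify_artifact(artifact, pinned))  # True for an untampered file
```

In practice the pinned digest would come from a trusted channel (for example, a signed release manifest) rather than the same source as the artifact itself.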

Secure deployment

This section contains guidelines that apply to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.

Secure operation and maintenance

This section contains guidelines that apply to the secure operation and maintenance stage of the AI system development life cycle. It provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.
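To make the logging and monitoring point concrete, the hedged sketch below shows one way to record inference events for audit while avoiding the sensitive-data exposure the executive summary warns about: the raw input is replaced with a digest. All names here (`log_inference`, the field names, the sample values) are hypothetical illustrations, not part of the guidance.

```python
import hashlib
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-audit")


def log_inference(user_id: str, prompt: str, output_label: str) -> dict:
    """Record an inference event as structured JSON.

    The prompt is stored as a SHA-256 digest so logs can support
    correlation and audit without retaining sensitive content.
    """
    event = {
        "event": "inference",
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_label": output_label,
    }
    log.info(json.dumps(event))
    return event


event = log_inference("analyst-7", "classify this record", "benign")
```

A real deployment would also cover retention periods, access control on the log store, and alerting on anomalous patterns, which are operational decisions outside the scope of this sketch.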

The guidelines follow a ‘secure by default’ approach, and are aligned closely to practices defined in the NCSC’s Secure development and deployment guidance, NIST’s Secure Software Development Framework, and ‘secure by design principles’ published by CISA, the NCSC and international cyber agencies. They prioritise:

  • taking ownership of security outcomes for customers
  • embracing radical transparency and accountability
  • building organisational structure and leadership so secure by design is a top business priority


This document is published by the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and the following international partners:

  •    National Security Agency (NSA)
  •    Federal Bureau of Investigation (FBI)
  •    Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  •    Canadian Centre for Cyber Security (CCCS)
  •    New Zealand National Cyber Security Centre (NCSC-NZ)
  •    Chile’s Government CSIRT
  •    National Cyber and Information Security Agency of the Czech Republic (NUKIB)
  •    Information System Authority of Estonia (RIA)
  •    National Cyber Security Centre of Estonia (NCSC-EE)
  •    French Cybersecurity Agency (ANSSI)
  •    Germany’s Federal Office for Information Security (BSI)
  •    Israeli National Cyber Directorate (INCD)
  •    Italian National Cybersecurity Agency (ACN)
  •    Japan’s National center of Incident readiness and Strategy for Cybersecurity (NISC)
  •    Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  •    Nigeria’s National Information Technology Development Agency (NITDA)
  •    Norwegian National Cyber Security Centre (NCSC-NO)
  •    Poland Ministry of Digital Affairs
  •    Poland’s NASK National Research Institute (NASK)
  •    Republic of Korea National Intelligence Service (NIS)
  •    Cyber Security Agency of Singapore (CSA)

In cooperation with:

  •    Alan Turing Institute
  •    Amazon
  •    Anthropic
  •    Databricks
  •    Georgetown University’s Center for Security and Emerging Technology
  •    Google
  •    Google DeepMind
  •    Hugging Face
  •    IBM
  •    Imbue
  •    Inflection
  •    Microsoft
  •    OpenAI
  •    Palantir
  •    RAND
  •    Scale AI
  •    Software Engineering Institute at Carnegie Mellon University
  •    Stanford Center for AI Safety
  •    Stanford Program on Geopolitics, Technology and Governance


