FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Overview
AI is one of the most powerful technologies of our time and presents a significant opportunity for innovation to advance U.S. national security. Such innovation must be responsible, lawful, and align with democratic values, including human rights, civil rights, civil liberties, privacy, and safety. The performance of AI systems shall be such that U.S. Government personnel can have trust and confidence in using AI systems, and the use of those systems should not undermine the public’s faith in U.S. national security institutions. The U.S. Government shall use AI systems and employ force informed by AI systems in a manner that complies with all applicable law and policy, including obligations under International Humanitarian Law, Human Rights Law, and the U.S. Government’s existing frameworks for the responsible use of AI.

Scope
The Framework to Advance AI Governance and Risk Management in National Security (“AI Framework”) builds on and fulfills the requirements found in Section 4.2 of the National Security Memorandum on Advancing the United States’ Leadership in AI, Harnessing AI to Fulfill National Security Objectives, and Fostering the Safety, Security, and Trustworthiness of AI (“AI NSM”), which directs designated Department Heads to issue guidance to their respective components/sub-agencies to advance governance and risk management practices regarding the use of AI as a component of a National Security System (NSS).[1][2] This AI Framework is intended to support and enable the U.S. Government to continue taking active steps to uphold human rights, civil rights, civil liberties, privacy, and safety; ensure that AI is used in a manner consistent with the President’s authority as commander-in-chief to decide when to order military operations in the nation’s defense; and ensure that military use of AI capabilities is accountable, including through such use during military operations within a responsible human chain of command and control. AI use in military contexts shall adhere to the principles and measures articulated in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, announced by the United States on November 9, 2023. This AI Framework includes four primary pillars relating to:

1. Identifying prohibited and “high-impact” AI use cases based on the risk they pose to national security, international norms, democratic values, human rights, civil rights, civil liberties, privacy, or safety, as well as AI use cases that impact Federal personnel.

2. Creating sufficiently robust minimum risk management practices for those categories of AI that are identified as high impact, including pre-deployment risk assessments.

[1] In the AI NSM, covered Department Heads include the Secretary of Defense, Director of National Intelligence, Attorney General, Secretary of Homeland Security, Secretary of Energy, Secretary of State, Secretary of Treasury, Secretary of Commerce, and any other Department Head of a covered agency that uses AI as part of a National Security System.

[2] This AI Framework is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

[3] This AI Framework applies to both new and existing AI developed, used, or procured by or on behalf of the U.S. Government, and it applies to system functionality that implements or is reliant on AI, rather than to the entirety of an information system that incorporates AI.

[4] Updates to this AI Framework shall be made pursuant to a National Security Council (NSC) Deputies Committee meeting through the process described in National Security Memorandum-2 of February 4, 2021 (Renewing the National Security Council System).
