RAND | AI Security: Safeguarding Large Language Models and Why This Matters for the Future of Geopolitics
8 Aug 2024
Given the dramatic, rapid, and unpredictable pace of change in artificial intelligence (AI) capabilities, there is an urgent need for robust, forward-thinking strategies to secure AI systems. As many national governments have acknowledged, AI models may soon be critical to national security: They could drive advantages in strategic competition and, in the wrong hands, enable significant harm. RAND gathered experts in AI and global security for a moderated panel discussion on the increasingly important topic of securing AI and its implications for national and homeland security. Richard Danzig, former U.S. Secretary of the Navy, delivered keynote remarks. The panel followed the publication of RAND research on securing model weights, the learnable parameters that encode the core intelligence of an AI model.
Former US Navy Secretary Richard Danzig on the AI threat to democracy:
— ControlAI (@ai_ctrl) August 28, 2024
Former US Navy Secretary Richard Danzig on the danger of AI development: "We're dealing with machinery here that is self-replicating, that has the potential for amplifying itself … by designing itself"
"This sets off … a kind of chain reaction … and a very troublesome one"
— ControlAI (@ai_ctrl) August 23, 2024
CISA Chief AI Officer Lisa Einstein on US government efforts on AI governance:
— Agencies are required to put on their websites how they use AI
— Agencies are required to have chief AI officers, who have to evaluate use cases for whether they impact safety
— ControlAI (@ai_ctrl) August 27, 2024
NSA AI Security Research Lead Tara Michels on AI risks:
— We're not prepared for AI superstructures, where a model uses the output of another
— The speed, sophistication, and scale with which attacks can be accomplished lower the barrier to entry for cyberthreats and many others
— ControlAI (@ai_ctrl) August 27, 2024