FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

There’s one last risk I can’t ignore. You’ll have noticed throughout my remarks that technology is an enabler of every modern threat, which brings me to AI. Would-be terrorists already try to harness AI for their propaganda, their weapons research, their target reconnaissance. State actors exploit AI to manipulate elections and sharpen their cyber attacks.

AI is far from all downside, of course. It presents precious opportunities for MI5, with GCHQ, MI6, and others, to enhance our ability to defend the UK. My teams already use AI ethically and responsibly across our investigations: conducting automated trawls of images we’ve collected to instantly spot the one with a gun in it; searching across large volumes of messages between suspects to clock the buried phrase that reveals an assassination plot. AI tools are making us more effective and more efficient in our core work.

MI5 has spent more than a century doing ingenious things to out-innovate our human, sometimes inhuman, adversaries. But in 2025, while contending with today’s threats, we also need to scope out the next frontier: potential future risks from non-human, autonomous AI systems which may evade human oversight and control. Given the risk of hype and scaremongering, I’ll choose my words carefully. I’m not forecasting Hollywood movie scenarios. I am, on the whole, a tech optimist who sees AI bringing real benefits. But as AI capabilities continue to power ahead, you would expect organizations like MI5 and GCHQ, and the UK’s groundbreaking AI Security Institute, to be thinking deeply today about what defending the realm might need to look like in the years ahead. Artificial intelligence may never mean us harm, but it would be reckless to ignore the potential for it to cause harm. We’re on the case.