FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

BBC NEWS. AI Safety: UK and US sign landmark agreement 

By Liv McMahon & Zoe Kleinman, BBC News

The UK and US have signed a landmark deal to work together on testing advanced artificial intelligence (AI).

The agreement signed on Monday says both countries will work together on developing “robust” methods for evaluating the safety of AI tools and the systems that underpin them.

It is the first bilateral agreement of its kind.

UK tech minister Michelle Donelan said AI is “the defining technology challenge of our generation”.

“We have always been clear that ensuring the safe development of AI is a shared global issue,” she said.

“Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The secretary of state for science, innovation and technology added that the agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November 2023.

The event, attended by AI bosses including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and tech billionaire Elon Musk, saw both the UK and US create AI Safety Institutes which aim to evaluate open and closed-source AI systems.

While things have felt quiet on the AI safety front since the summit, the AI sector itself has been extremely busy.

Competition between the biggest AI chatbots – such as ChatGPT, Gemini and Claude – remains ferocious.

So far the almost exclusively US-based firms behind all of this activity are still cooperating with the concept of regulation, but regulators have yet to curtail anything these companies are trying to achieve.

Similarly, regulators have not demanded access to information the AI firms are unwilling to share, such as the data used to train their tools or the environmental cost of running them.

The EU’s AI Act is on its way to becoming law and once it takes effect it will require developers of certain AI systems to be upfront about their risks and share information about the data used.

This is important, after OpenAI recently said it would not release a voice cloning tool it developed due to “serious risks” the tech presents, particularly in an election year.

In January, a fake, AI-generated robocall claiming to be from US President Joe Biden urged voters to skip a primary election in New Hampshire.

Currently in the US and UK, AI firms are mostly regulating themselves.

AI concerns

Currently, the majority of AI systems are only capable of performing single, intelligent tasks that would usually be completed by a human.

Known as “narrow” AI, these systems handle tasks ranging from quickly analysing data to providing a desired response to a prompt.

But there are fears that more intelligent “general” AI tools – capable of completing a range of tasks usually performed by humans – could endanger humanity.

“AI, like chemical science, nuclear science, and biological science, can be weaponised and used for good or ill,” Prof Sir Nigel Shadbolt told the BBC’s Today programme.

But the University of Oxford professor said fears around AI’s existential risk “are sometimes a bit overblown”.

“We’ve got to be really supportive and appreciative of efforts to get great AI powers thinking about and researching what the dangers are,” he said.

“We need to understand just how susceptible these models are, and also how powerful they are.”

Gina Raimondo, the US commerce secretary, said the agreement will give the governments a better understanding of AI systems, which will allow them to give better guidance.

“It will accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” she said.

“Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.”

Learn more: U.S. and UK Announce Partnership on Science of AI Safety. U.S. Department of Commerce

U.S. and UK AI Safety Institutes to work seamlessly with each other, partnering on research, safety evaluations, and guidance for AI safety

Institutes to develop shared capabilities through information-sharing, close cooperation, and expert personnel exchanges

The U.S. and UK have today signed a Memorandum of Understanding (MOU) which will see them work together to develop tests for the most advanced AI models, following through on commitments made at the AI Safety Summit last November.

Signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, the partnership will see both countries working to align their scientific approaches and working closely to accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.

The U.S. and UK AI Safety Institutes have laid out plans to build a common approach to AI safety testing and to share their capabilities to ensure these risks can be tackled effectively. They intend to perform at least one joint testing exercise on a publicly accessible model. They also intend to tap into a collective pool of expertise by exploring personnel exchanges between the Institutes.

The partnership will take effect immediately and is intended to allow both organizations to work seamlessly with one another. AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety which can keep pace with the technology’s emerging risks. As the countries strengthen their partnership on AI safety, they have also committed to develop similar partnerships with other countries to promote AI safety across the globe.

“AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance,” said U.S. Secretary of Commerce Gina Raimondo. “By working together, we are furthering the long-lasting special relationship between the U.S. and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future.”

“This agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation,” said UK Secretary of State for Science, Innovation, and Technology, Michelle Donelan. “We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives. The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November, and I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI’s enormous benefits safely and responsibly.”

Reflecting the importance of ongoing international collaboration, today’s announcement will also see both countries sharing vital information about the capabilities and risks associated with AI models and systems, as well as fundamental technical research on AI safety and security. This will work to underpin a common approach to AI safety testing, allowing researchers on both sides of the Atlantic—and around the world—to coalesce around a common scientific foundation.

