GOV.UK press release
Leading frontier AI companies publish safety policies
Top frontier AI firms have outlined their safety policies to boost transparency and encourage the sharing of best practice within the AI community.
From: Department for Science, Innovation and Technology and The Rt Hon Michelle Donelan MP
Published 27 October 2023
- Top frontier AI firms including Google DeepMind have outlined their safety policies following a request from the Technology Secretary.
- Companies publish their responses as the UK Government also sets out safety processes to help frontier AI firms keep their models safe as they continue to develop them and harness opportunities.
- It follows Prime Minister Rishi Sunak yesterday outlining the risks of AI and setting out that the UK will establish the world’s first AI Safety Institute.
Leading AI companies have today (Friday 27 October) published their safety policies following a request from the Technology Secretary last month, in a move to boost transparency and encourage the sharing of best practice within the AI community.
It comes as the UK Government reveals a set of emerging safety processes for the companies, providing information on how they can keep their models safe – guidance intended to inform discussions at Bletchley Park next week.
The government paper outlines practices for AI companies, including implementing responsible capability scaling – a new framework for managing frontier AI risk that several firms are already putting into action. This would see AI firms set out ahead of time which risks will be monitored, who is notified if those risks are found, and at what level of dangerous capability a developer would slow or even pause their work until better safety mechanisms are in place.
Other suggestions include AI developers employing third parties to try to hack their systems to identify sources of risk and potential harmful impacts, as well as providing additional information on whether content has been AI generated or modified.

At the heart of these emerging safety practices is innovation, with the UK Government clear that the only way to seize the opportunities for economic growth and public good is by understanding the risks at the frontier of AI development.
Yesterday the Prime Minister confirmed the UK will establish the world’s first AI Safety Institute to advance the world’s knowledge of AI safety, and carefully examine, evaluate and test new types of AI so there is an understanding of what each new model is capable of. It will look to share information with international partners, policymakers, private companies, academia and civil society as part of efforts to collaborate on AI safety research. Today’s announcement from the leading frontier AI companies begins the conversation about safety policies, which the AI Safety Institute can now take forward through its programme of research, evaluation and information sharing, working with the government’s AI Policy team.
New findings published today show international support for a government-backed AI safety institute to evaluate powerful AI and test whether it is safe, with 62% of Brits surveyed backing the idea. The survey of international public opinion on AI safety, conducted across nine countries including Canada, France, Japan, the UK and the USA, saw strong support in most nations for powerful AI to be tested by independent experts. Most respondents in all countries agreed, ranging from 59% in Japan to 76% in the UK and Singapore. When asked who they would trust to have overall responsibility for ensuring AI is safe, an AI safety institute was the most popular option in seven of the nine countries surveyed, often by some distance.
Today’s paper contains processes and associated practices that some frontier AI organisations are already implementing and others that are being considered within academia and broader civil society. While there may be some processes and practices relevant for different kinds of AI organisations, others – such as responsible capability scaling – are specifically developed for frontier AI and are not designed for lower capability or non-frontier AI systems.
Technology Secretary Michelle Donelan said:
This is the start of the conversation and as the technology develops, these processes and practices will continue to evolve, because in order to seize AI’s huge opportunities we need to grip the risks.
We know openness is key to increasing public trust in these AI models, which in turn will drive uptake across society, meaning more people will benefit – so I welcome AI developers publishing their safety policies today.
Today’s paper also highlights the long-standing technical challenges in building safe AI systems, including carrying out safety evaluations and understanding how these systems make decisions. Safety best practices have not yet been established for frontier AI development – which is why the UK Government has published emerging processes to inform the vital discussion of safe frontier AI at the summit.
It follows a new government discussion paper published yesterday pointing to rapid recent progress in frontier AI which is expected to continue in the coming years. This could see these models evolve at ever-greater speed, leading to a danger they will exceed human understanding, and even human control.
The UK recognises the enormous opportunities AI can unlock across the economy and society – however, without appropriate guardrails, such technologies can pose significant risks. The AI Safety Summit will focus on how best to manage the risks from frontier AI such as misuse, loss of control and societal harms. Frontier AI organisations play an important role in addressing these risks and promoting the safety of the development and deployment of frontier AI.
Frontier AI Taskforce Chair Ian Hogarth said:
We have focused on frontier AI at next week’s summit very deliberately, as these are the most capable models.
While frontier AI brings opportunities, more capable systems can also bring increased risk. AI companies providing increased transparency of their safety policies is a first step towards providing assurance that these systems are being developed and deployed responsibly.
Over the last few months the UK Government’s Frontier AI Taskforce has been recruiting leading names from all areas of the AI ecosystem, from security to computer science, to advise on the risks and opportunities of AI, with the Prime Minister yesterday hailing it a huge success.
Today’s publication on emerging safety practices is intended to support frontier AI companies to establish effective AI safety policies.
Adam Leon Smith of BCS, The Chartered Institute for IT, and Chair of its Fellows Technical Advisory Group (F-TAG), said:
This set of emerging, adaptable processes and practices moves the industry forwards significantly, and sets a new bar for research and development.
It is challenging to talk about how to manage safety when we are dealing in some cases with systems that are too advanced for us to have yet built – but it’s important to have the vision and courage to anticipate the risks.
The processes here also provide inspiration and best practices that may be useful for managing the risks posed by many AI systems already on the market.
The UK is hosting the AI Safety Summit as the government looks long-term at the tough decisions that need to be taken for a brighter future for the next generation, powered by AI developments.
Notes to editors
- Read the full report on Emerging processes for frontier AI safety.
- Read the outlines of each leading AI company’s safety policies on the AI Safety Summit website.
- The 6 companies publishing are Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon and Meta.
- Read more information on the AI Safety Summit on GOV.UK.
- The set of emerging processes does not represent government policy that must be enacted; it is intended as a point of reference to inform the development of frontier AI organisations’ safety policies and as a companion for readers of those policies.
- Read the International survey of public opinion on AI safety.