The world’s week on AI safety: powerful computing efforts launched to boost research

UK and US governments establish efforts to democratize access to supercomputers that will aid studies on AI systems.

03 NOV 2023

Two major steps towards governmental oversight of artificial intelligence (AI) occurred this week in the United States and the United Kingdom. Both initiatives represent moves by the two nations to boost their AI research capabilities, and include efforts to broaden access to the powerful supercomputers needed to train AI systems.

On 30 October, US President Joe Biden signed his nation’s first AI executive order, with a huge swath of directives for US federal agencies to guide the use of AI — and put guardrails on the technology. And on 1–2 November, the United Kingdom hosted a high-profile AI Safety Summit, convened by Prime Minister Rishi Sunak, with representatives from more than two dozen countries and from tech companies including Microsoft and Meta. The summit, held at the famed wartime code-breaking facility of Bletchley Park near Milton Keynes, produced the Bletchley Declaration, in which the signatory countries pledge to better assess and manage the risks of powerful ‘frontier’ AI — advanced systems that could be used to develop risky technologies, such as biological weapons.

“We’re talking about AI that doesn’t yet exist — the things that are going to come out next year,” says Yoshua Bengio, an AI pioneer and scientific director of Mila, the Quebec AI Institute in Canada, who attended the summit.

Both nations have committed to developing a national AI ‘research resource’, with the aim of providing AI researchers with cloud access to heavy-hitting computing power. The United Kingdom, in particular, has made a “massive investment”, says Russell Wald, who leads the policy and society initiative at the Stanford Institute for Human-Centered Artificial Intelligence in California.

These efforts are meaningful for a branch of science that relies heavily on expensive computing infrastructure, says policy researcher Helen Toner at Georgetown University’s Center for Security and Emerging Technology in Washington DC. “A major trend in the last five years of AI research is that you can get better performance from AI systems just by scaling them up. But that’s expensive,” she says.

“Training a frontier AI system takes months and costs tens or hundreds of millions of dollars,” agrees Bengio. “In academia, this is currently impossible.” One of the aims of the research-resource initiatives is to democratize these capabilities.

“It’s a good thing,” says Bengio. “Right now, all of the capability to work with these systems is in the hands of companies that want to make money from them. We need academics and government-funded organizations that are really working to protect the public to be able to understand these systems better.”

All the bases

Biden’s executive order is limited to guiding the work of federal agencies, because it is not a law passed by Congress. Nevertheless, says Toner, the order has a broad reach. “What you can see is the Biden administration really taking AI seriously as an all-purpose tech, and I like that. It’s good that they’re trying to cover a lot of bases.”

One important emphasis in the order, says Toner, is on creating much-needed standards and definitions in AI. “People will use words like ‘unbiased’, ‘robust’ or ‘explainable’,” to describe AI systems, she says. “They all sound good, but in AI, we have almost no standards for what these things really mean. That’s a huge problem.” The order calls for the US National Institute of Standards and Technology to develop such standards, alongside tools (such as watermarks) and ‘red-team testing’ — in which good actors try to misuse a system to test its security — to help to ensure that powerful AI systems are “safe, secure and trustworthy”.

The executive order directs agencies that fund life-sciences research to establish standards to protect against AI being used to engineer dangerous biological materials.

Agencies are also encouraged to help skilled immigrants with AI expertise to study, stay and work in the United States. And the National Science Foundation (NSF) must fund and launch at least one regional innovation engine programme that prioritizes AI-related work, and, in the next 18 months, establish at least four national AI research institutes, on top of the 25 currently funded.

Research resources

Biden’s order commits the NSF to launching a pilot of the National AI Research Resource (NAIRR) within 90 days. This is a proposed system to provide access to powerful, AI-capable computing through the cloud, a distributed set of servers. “There’s a fair amount of excitement about this,” says Toner.

“It’s something we’ve been championing for years,” says Wald. “This is recognition at the highest level that there’s need for this.”

In 2021, Wald and his colleagues at Stanford published a white paper with a blueprint of what such a service might look like. In January, a White House NAIRR task-force report called for its budget to be US$2.6 billion over an initial period of six years. “That’s peanuts,” says Wald. “In my view it should be substantially larger.” Lawmakers will have to pass the CREATE AI Act, a bill introduced in July 2023, to release funds for a full-scale NAIRR, he says. “We need Congress to step up and take this seriously, and fund and invest,” says Wald. “If they don’t, we’re leaving it to the companies.”

Similarly, the UK plans to set up a national AI Research Resource (AIRR) to provide supercomputer-level computing power to diverse researchers keen on studying frontier AI.

The UK government announced plans for the UK AIRR in March. At the summit, the government said that it would triple the AIRR funding pot from £100 million (US$124 million) to £300 million, as part of a previously announced £900-million investment to transform UK computing capacity. Given the nation’s population and gross domestic product, the UK investment is much more substantial than the US proposal, says Wald.

The UK plan is backed by two new supercomputers: Dawn in Cambridge, which operators aim to have up and running in the next two months; and the Isambard-AI cluster in Bristol, which is expected to come online next summer.

Isambard-AI will be one of the world’s top-five AI-capable supercomputers, says Simon McIntosh-Smith, director of the Isambard National Research Facility at the University of Bristol, UK. Alongside Dawn, he says, “these capabilities mean that UK researchers will be able to train even the largest frontier models being conceived, in a reasonable amount of time”.

Such moves are helping countries like the United Kingdom to develop the expertise needed to guide AI for the public good, says Bengio. But legislation will also be needed, he says, to safeguard against the development of future AI systems that are smart and hard to control.

“We are on a trajectory to build systems that are extremely useful and potentially dangerous,” he says. “We already ask pharma to spend a huge chunk of their money to prove that their drugs aren’t toxic. We should do the same.”

Nature 623, 229–230 (2023)

doi: https://doi.org/10.1038/d41586-023-03472-x
