
The Lancet Digital Health. Crossing the frontier: the first global AI safety summit

Talha Burki

Open Access

Published: January 04, 2024

On Nov 1–2, 2023, the UK hosted the first global summit on artificial intelligence (AI) safety. It was held at Bletchley Park, the country house and estate in southern England that was home to the team that deciphered the Enigma code. Some 150 representatives from national governments, industry, academia, and civil society attended, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and senior executives from major technology companies. The focus was on frontier AI, which was defined by the UK Government in a publication released in advance of the summit as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”. In other words, technologies on the cutting edge and beyond.
The UK Government cited two key concerns: misuse risks and loss-of-control risks. An extreme example of misuse would be if an AI system assisted in the construction of a chemical or biological weapon. As things stand, there are just a few dozen people around the world who have the expertise to build a chemical or biological weapon. AI could lower the barriers to entry for an individual or group seeking to do something destructive.
“We know how easy it is to create candidate toxins using current AI”, said Yoshua Bengio, Professor in the Department of Computer Science and Operations Research at Université de Montréal (QC, Canada). “If it is a bacteria or virus, with the AI system proposing the alteration of genes so as to make something more lethal, we could end up in an extremely dangerous situation”. Bengio’s group has been using machine learning to design drugs and proteins. Such research is not overseen by statutory standards in Canada, nor does it require ethical review. If an AI system with similar design capabilities, but without adequate safety precautions, got into the wrong hands, there could be serious consequences. Moreover, what constitutes adequate safety precautions has yet to be established.
An example of loss of control would be if AI pushed a nation in a direction that was contrary to its values or likely to cause unrest. If a technology resulted in large numbers of jobs becoming obsolete, for example, countries would have to deal with the prospect of mass unemployment and the consequences of this for social cohesion, health, and well-being. Loss of control could also occur through an AI system becoming autonomous and prioritising its own goals over avoiding harm to human beings. In health care, loss of control could occur through an AI system usurping the autonomy of the physician. Even less dramatic losses of control could still be a nuisance: misfiring AI systems are costly and time-consuming to put right, and could lead to avoidable patient harm.
The summit culminated with the release of the Bletchley Declaration, which was signed by 28 countries plus the EU. The declaration emphasised the urgency of the task ahead, both in terms of coming to grips with the risks of frontier AI and of establishing an internationally cooperative response. The attendees of the Bletchley Park summit pledged to collaborate on testing frontier AI systems against various potential harms, such as those related to national security. They agreed that governments should play a major role in ensuring that safety testing regimens are fit for purpose.
A State of the Science report was commissioned, with Bengio as its chair. “We are not going to be making recommendations or proposing regulation”, he told The Lancet Digital Health. “The idea is to summarise what we know about the capabilities and risks of these AI systems. It will then be up to policymakers to choose how they balance innovation and safety”. The report will assess trends in frontier AI, outline the different kinds of harms and risks, and define priority areas for research to improve the safety of AI systems and ensure that they are aligned with human values and social norms. The report will be regularly updated, and its findings will be used to inform future AI safety summits, two of which have been scheduled for 2024.
The inaugural safety summit also marked the launch of the UK Artificial Intelligence Safety Institute. “Its mission is to minimise surprise to the UK and humanity from rapid and unexpected advances in AI”, stated the policy paper introducing the new institute. It will undertake evaluations of AI systems, for example by assessing whether a malign actor could readily turn the system to a harmful purpose, and invest in foundational AI safety research.
Saira Ghafur, Lead for Digital Health at the Institute of Global Health Innovation at Imperial College London, UK, welcomed the summit. “It opened up dialogue and mapped a way forward. There was a good turnout, especially in terms of Big Tech, though it would have been nice to see a few more academics and representatives from small companies”, she said. Bengio added that the summit had an important symbolic value. “Governments do not have a lot of bandwidth. Having them sit in one place for two days to talk about one particular subject brings attention”, he said.
Two days before the Bletchley Park event, US President Joe Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. It mandated that developers of the most powerful AI systems share vital data on matters such as safety with the US Government, ordered the development of standards for biological synthesis screening, and underscored the importance of tackling algorithmic discrimination. The US Department of Energy and the US Department of Homeland Security have been tasked with assessing threats posed by AI to critical infrastructure, alongside chemical, biological, radiological, nuclear, and cybersecurity risks.
Plans for EU regulation are also progressing. On Dec 8, 2023, the text of the proposed EU Artificial Intelligence Act was finalised. The act is underpinned by a risk-based framework in descending tiers: unacceptable, high, limited, and minimal or no risk. Technologies in the top tier would be banned altogether; those that are of high risk, such as systems that could endanger lives or infringe on fundamental rights, would require careful oversight. For limited-risk and minimal-risk systems, which make up the vast majority of AI technologies in current use, regulators would take a light touch. The Artificial Intelligence Act also includes transparency stipulations and testing requirements.
As AI becomes increasingly sophisticated, it will be imperative that researchers are aware of the possible consequences should a malign actor get hold of their system or code, and know how best to mitigate that risk. Bengio mentioned the intriguing possibility of instructing manufacturers to make their AI systems unlearn certain patterns or information. This could be particularly useful for remediating the effects of biased datasets, or of data that violated privacy or otherwise steered the system in an undesirable direction.
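The article does not describe how such unlearning would work in practice. As a rough sketch only, the toy example below shows the gold-standard form of unlearning: retraining a model from scratch with the records to be forgotten left out. Everything in it (the small logistic regression, the synthetic records, the forget_set split) is invented for illustration and is not drawn from Bengio's remarks or from any real system; approximate unlearning methods aim for the same end state without the cost of full retraining, which is prohibitive at frontier scale.

# Illustrative only: "unlearning" by the simplest correct route, retraining
# without the records to be forgotten. The data and model are synthetic.
import math
import random

def train_logistic(data, epochs=200, lr=0.1):
    """Fit a two-feature logistic regression with plain SGD on (x, y) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the log-loss w.r.t. z
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

random.seed(0)
# Synthetic "records": class 1 clusters around (2, 2), class 0 around (-2, -2).
dataset = [([random.gauss(2, 1), random.gauss(2, 1)], 1) for _ in range(50)] + \
          [([random.gauss(-2, 1), random.gauss(-2, 1)], 0) for _ in range(50)]

# Suppose the last 10 records must be forgotten (e.g. a privacy request).
forget_set = dataset[-10:]
retain_set = dataset[:-10]

model_full = train_logistic(dataset)          # trained on everything
model_unlearned = train_logistic(retain_set)  # retrained with forget_set excluded

print(f"forgetting {len(forget_set)} records")
print("weights with forget set   :", model_full)
print("weights without forget set:", model_unlearned)

A deployed system would also need some way of verifying that the forgotten records no longer influence its outputs, which remains an open research question.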
Health care has long been wrestling with how best to regulate AI. Anna Studman and Mavis Machirori are Senior Researchers at the Ada Lovelace Institute, a London-based research centre that focuses on data and AI. “In the UK, we have several regulatory bodies that have a say in how new healthcare technologies are deployed. We need them to come together with everyone who will actually be using novel AI systems, or are impacted by them, so that we can build a framework that directs how these technologies should and should not be used”, said Machirori. She added that it would be naive to rely on assurances from the manufacturers that their products do not, for example, exacerbate inequalities. “Big Tech has commercial interests. These companies cannot be the sole authorities determining what is good, safe and should continue to be used”, said Machirori.
Studman pointed out that regulators will need to be adequately resourced, given the expansion of their work and the importance of post-marketing surveillance. Responsibilities will have to be clarified. At present, the UK Medicines and Healthcare products Regulatory Agency (MHRA) is concerned primarily with safety, rather than inequalities. However, the two issues are not distinct. “Our research suggests that inequalities can lead to serious safety issues”, explained Studman. The UK Government has commissioned an independent review into equity in medical devices, the results from which have yet to be released.
Algorithms used as medical devices in the UK are already regulated by the MHRA. “You are not going to see hospitals rolling out untested diagnostic devices”, said Ghafur. “But there are questions around lower risk things. Is service provision going to be improved by the introduction of a particular AI system? Is it going to save money? We need to be better at making these assessments and ensuring that there is evidence for these tools ahead of more widespread use”. There is also the so-called black box issue. It is not always possible to understand the basis of the advice given by an AI system, which raises questions of accountability. “If something goes wrong, who is to blame?”, asked Ghafur. “There are a lot of issues that need to be addressed, in both the design and regulation of AI”.
Bengio cautioned that the kind of advances in AI that still sound like science fiction could become a reality far sooner than we expect. “We need to be prepared, otherwise we could find that the pace of change is too fast for us to deal with. That could lead to a huge amount of instability”, he concluded. The Bletchley Park summit was the first step in what will be an ongoing collaborative process aimed at ensuring that such instability does not come to pass.
