
Future of Life Institute Newsletter: Our Most Realistic Nuclear War Simulation Yet
Welcome to the Future of Life Institute newsletter. Every month, we bring 28,000+ subscribers the latest news on how emerging technologies are transforming our world – for better and worse.

If you’ve found this newsletter helpful, why not tell your friends, family and colleagues to subscribe here?

Today’s newsletter is a 6-minute read. We cover:

  • FLI releases our most realistic nuclear war simulation yet
  • The EU AI Act draft has been passed by the European Parliament
  • Leading AI experts debate the existential threat posed by AI
  • New research shows how large language models can aid bioterrorism
  • A nuclear war close call, from one faulty chip
A Frighteningly Realistic Simulation of Nuclear War
On Thursday, we released our most scientifically realistic simulation of what a nuclear war between Russia and the United States might look like, accompanied by a Time article written by FLI President Max Tegmark.

Here’s what detailed modelling says most major cities around the world would experience in a nuclear war:

  • An electromagnetic pulse blast knocking out all communications.
  • An explosion that melts streets and buildings.
  • A shockwave that shatters windows and bones.
  • Soot and debris that rise so high into the atmosphere that they shroud the earth, catastrophically dropping global temperatures and consequently starving 99% of all humans.
How would a nuclear war between Russia and the US affect you personally?
The Bulletin of the Atomic Scientists’ Doomsday Clock has never been closer to midnight than it is now. Learn more about the risks posed by nuclear weapons and find out how you can take action to reduce the risks here.

Additionally, check out our previous and ongoing work on nuclear security:

  • We are thrilled to reveal the grantees of our Humanitarian Impacts of Nuclear War program. Find the list of projects here, totalling over $4 million in grants awarded in this round.
  • Our video explaining in more detail the nuclear winter that would follow a nuclear war: The Story of Nuclear Winter.
  • The 2022 Future of Life Award was given to eight individuals for their contributions to discovering nuclear winter and raising awareness of it. Watch their stories here.
European Lawmakers Agree on Draft AI Act
In mid-June, the European Parliament passed a draft version of the EU AI Act, taking further steps to implement the world’s most comprehensive artificial intelligence legislation to date. We’re hopeful this is the first of similar sets of rules to be established by other governments.

From here, the agreed-upon version of the AI Act moves to negotiations starting in July between the European Parliament, the Commission, and the Council, representing the 27 member states. EU representatives have shared their intention to finalise the Act by the end of 2023.

Wondering how well foundation model providers such as OpenAI, Google, and Meta currently meet the draft regulations? Researchers from the Stanford Center for Research on Foundation Models and Stanford Institute for Human-Centered AI published a paper evaluating how well 10 such providers comply, finding significant gaps in compliance. You can find their paper in full here.

Stay in the loop: 

As the EU AI Act progresses to its final version, be sure to follow our dedicated AI Act website and newsletter to stay up-to-date with the latest developments.

AI Experts Debate Risk
FLI President Max Tegmark, together with AI pioneer Yoshua Bengio, took part in a Munk Debate supporting the thesis that AI research and development poses an existential threat. AI researcher Melanie Mitchell and Meta’s Chief AI Scientist Yann LeCun opposed the motion.

When polled at the start of the debate, 67% of the audience agreed that AI poses an existential threat, with 33% disagreeing; by the end, 64% agreed while 36% disagreed with the thesis.

It’s also worth noting that the Munk Debate voting application didn’t work at the end of the debate, so final results reflect audience polling via email within 24 hours of the debate, instead of immediately following its conclusion.

You can find the full Munk Debate recording here, or listen to it as a podcast here.

Governance and Policy Updates
AI policy:

▶ U.S. Senate Majority Leader Chuck Schumer announced his “SAFE Innovation” framework to guide the first comprehensive AI regulations in the U.S.; he also announced “AI Insight Forums”, which will convene AI experts on a range of AI-related concerns.

▶ The Association of Southeast Asian Nations, whose 10 member states are home to 668 million people, is set to establish “guardrails” on AI by early 2024.

▶ U.S. President Joe Biden met with civil society tech experts to discuss AI risk mitigation. Meanwhile, the 2024 election approaches with no clarity on how the use of AI in election ads and related media will be regulated.

Climate change: 

▶ A report released Wednesday by the UK’s Climate Change Committee expressed concern about the British Government’s ability to meet its Net Zero goals, citing its lack of urgency.

▶ In more hopeful news, Spain is significantly raising its clean energy production targets over the next decade, now expecting to have 81% of power produced by renewables by 2030.

Updates from FLI
▶ FLI President Max Tegmark spoke on NPR about the use of autonomous weapons in battle, and the need for a binding international treaty on their development and use.
▶ FLI’s Dr. Emilia Javorsky took part in a panel discussing the intersection between artificial intelligence and creativity, put on by Hollywood, Health & Society at the ATX TV Festival in early June. The recording is available here.
▶ At the 7th Annual Center for Human-Compatible AI Workshop, FLI board members Victoria Krakovna and Max Tegmark, along with external advisor Stuart Russell, participated in a panel on aligning large language models to avoid harm.
▶ Our podcast host Gus Docker interviewed Dr. Roman Yampolskiy about objections to AI safety, Dan Hendrycks (director of the Center for AI Safety) on his evolutionary perspective of AI development, and Joe Carlsmith on how we change our minds about AI risk.
New Research: Could Large Language Models Cause a Pandemic?
AI meets bio-risk: A concerning new paper outlines how, in just one hour of interaction, large language models (LLMs) like the AI system behind ChatGPT provided MIT undergraduates with a selection of four pathogens that could be weaponised, and even offered tangible strategies for engineering those pathogens to cause a deadly pandemic.

Why this matters: While AI presents many possible risks on its own, the convergence of AI with biotechnology is another area of massive potential harm. The ability of LLMs to be leveraged in this way gives bad actors new opportunities to create biological weapons with far less expertise than ever before.

What We’re Reading
▶ Categorizing Societal-Scale AI Risks: AI researchers Andrew Critch and Stuart Russell developed an exhaustive taxonomy, centred on accountability, of societal-scale and extinction-level risks to humanity from AI.

▶ Looking Back: Writer Scott Alexander presents a fascinating retrospective of AI predictions and timelines, comparing survey results from 352 AI experts in 2016 to the current status of AI in 2023.

▶ FAQ on Catastrophic AI Risk: AI expert Yoshua Bengio posted an extensive discussion of frequently asked questions about catastrophic AI risks.

What We’ll Be Watching: On 10 July, Netflix is releasing its new “Unknown: Killer Robots” documentary on autonomous weapons, as part of its “Unknown” series. FLI’s Dr. Emilia Javorsky features in it – you can watch the newly-released trailer here.

Hindsight is 20/20
Source: National Archives, Still Pictures Branch, RG 306-PSE, box 79
In the early hours of 3 June 1980, the message that usually read “0000 ICBMs detected 0000 SLBMs detected” on missile detection monitors at U.S. command centres suddenly changed, displaying varying numbers of incoming missiles.

U.S. missiles were prepared for launch and nuclear bomber crews started their engines, amongst other preparatory initiatives. Thankfully, before any action was taken, the displayed numbers were reassessed. There had been a glitch.

The same false alarm happened three days later, before the faulty computer chip was finally discovered. The entire system – on which world peace was dependent – was effectively unreliable for several days because of one defective component.

In this case, and in so many other close calls, human reasoning helped us narrowly avoid nuclear war. How many more close calls will we accept?

Chart of the Month
AI computational power is now on a trajectory predicted to exceed that of the human brain within 10-15 years, if not sooner.
Note: human-level capabilities may be reached (or may already have been reached) long before human-level compute.
“Computation Used to Train AI Models With Bioanchors” by Taylor Jones is marked with CC0 1.0 Universal.
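
For intuition on where such a crossover estimate comes from, here is a minimal sketch in Python of the underlying extrapolation: exponential growth in training compute compared against a fixed “bio-anchor” threshold. The starting compute, anchor value, and doubling times below are illustrative placeholders chosen for demonstration, not figures taken from the chart or from FLI.

    import math

    def years_to_cross(current_flop: float, anchor_flop: float,
                       doubling_time_years: float) -> float:
        """Years until exponential growth at the given doubling time reaches the anchor."""
        doublings_needed = math.log2(anchor_flop / current_flop)
        return doublings_needed * doubling_time_years

    # Illustrative placeholder values (assumptions, not taken from the chart):
    CURRENT_TRAINING_FLOP = 1e25   # rough order of magnitude of a large 2023 training run
    BIO_ANCHOR_FLOP = 1e30         # hypothetical brain-equivalent training-compute anchor

    for doubling_time in (0.5, 1.0, 2.0):   # assumed doubling times, in years
        t = years_to_cross(CURRENT_TRAINING_FLOP, BIO_ANCHOR_FLOP, doubling_time)
        print(f"Doubling every {doubling_time:.1f} yr -> anchor crossed in ~{t:.0f} yr")

The spread of results is the point: whether the crossover lands inside a 10-15 year window depends heavily on the assumed growth rate and on which brain-compute anchor is used.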
FLI is a 501(c)(3) non-profit organisation, meaning donations are tax-deductible in the United States. If you need our organisation number (EIN) for your tax return, it’s 47-1052538. FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.
