SCIENCE. AI and biosecurity: The need for governance. 

Governments should evaluate advanced models and if needed impose safety measures

Abstract: Great benefits to humanity will likely ensue from advances in artificial intelligence (AI) models trained on or capable of meaningfully manipulating substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields (1–3). But as with any powerful new technology, such biological models will also pose considerable risks. Because of their general-purpose nature, the same biological model able to design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity (4). Voluntary commitments among developers to evaluate biological models’ potential dangerous capabilities are meaningful and important but cannot stand alone. We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.

AXIOS. AI biosecurity concerns prompt call for national rules.

Alison Snyder

Aug 23, 2024

Biosecurity experts are calling on governments to set new guardrails in an effort to limit the risks posed by advanced AI models being applied to biology.

Why it matters: AI models trained on genetic sequences have double-edged potential: they can help scientists design new medicines and vaccines, but they can also be used to create new or enhanced pathogens.

Where it stands: Experts from OpenAI and RAND, among other institutions, say today’s large language models don’t increase the risk of a bioweapon being created, in part because there’s not enough data to train them.

  • There’s an impression today that “producing biological weapons with all of the information that AI can provide is easy, straightforward,” Sonia Ben Ouagrham-Gormley, deputy director of the Biodefense Graduate Program at George Mason University, said at a Center for a New American Security event on Wednesday.
  • But “producing biological weapons is very complex, very complicated.”

Yes, but: Many researchers say it’s only a matter of time before AI models improve enough to make that a real possibility.

  • “I think we need to listen to the AI developers about the potential capabilities of the next generation,” Anita Cicero, deputy director of the Johns Hopkins Center for Health Security, said at the event.
  • She also raised the possibility of automated labs, or labs run remotely through the cloud, using trial and error to try to make a new pathogen, which could mean less expertise would be needed to conduct such experiments.

What they’re saying: AI developers committing to evaluate models is “important but cannot stand alone,” Cicero and her co-authors wrote in the journal Science.

  • They call on governments to evaluate models trained on large amounts of biological data, or on especially sensitive data, before those models are released.
  • That could address potential risks without hampering academic freedom, they argue.
  • The group also wants companies and institutions that synthesize nucleic acids (turning genetic sequence information into physical molecules) to screen their customers and their orders; a minimal sketch of what order screening might look like follows this list. Beginning in October 2024, federally funded researchers will be required to get their synthetic nucleic acids from providers that screen purchases.
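In practice, “screening orders” generally means comparing an ordered DNA sequence against curated databases of sequences of concern before synthesis. The sketch below is purely illustrative and is not any provider’s actual pipeline: the watchlist contents, the flag_order function, and the 50-nucleotide match threshold are all hypothetical stand-ins for what real screening systems do with curated databases and alignment software.

```python
# Illustrative sketch only, NOT a real screening pipeline: actual providers
# use curated, regularly updated databases of sequences of concern and
# alignment tools (BLAST-style search), plus expert human review.

WINDOW = 50  # flag any 50-nucleotide exact match; an arbitrary demo threshold

# Hypothetical stand-in for a curated sequences-of-concern database.
SEQUENCES_OF_CONCERN = {
    "demo_entry": "ATGC" * 40,  # placeholder string, not a real pathogen gene
}


def windows(seq: str, k: int):
    """Yield every k-nucleotide window of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]


def flag_order(order_seq: str) -> list[str]:
    """Return names of watchlist entries sharing a long exact match with the order."""
    order_windows = set(windows(order_seq.upper(), WINDOW))
    return [
        name
        for name, concern_seq in SEQUENCES_OF_CONCERN.items()
        if any(w in order_windows for w in windows(concern_seq, WINDOW))
    ]


if __name__ == "__main__":
    print(flag_order("ATGC" * 40))      # ['demo_entry'] -> hold for human review
    print(flag_order("AACCGGTT" * 25))  # [] -> no watchlist overlap
```

Note that the proposal described above pairs this kind of sequence screening with customer screening, i.e., verifying the legitimacy of the buyer, which no sequence-matching step can replace.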

Friction point: Some researchers say more work should be done first to understand what AI models can do in laboratory settings.

  • Ben Ouagrham-Gormley argued that before AI biological models are regulated, “We absolutely need to better understand the capability itself, and then that will allow us to assess better the risks associated with those [models].”

Between the lines: A lot of work developing AI biological models is done in the private sector.

  • Government should focus on “advanced biological models that pose potential high-consequence threats, whether or not they were developed with federal funding assistance,” Cicero and her co-authors wrote.

The big picture: It’s incumbent on the U.S. and other leaders in the field to set up their own governance systems, Tom Inglesby, director of the Johns Hopkins Center for Health Security, told Axios.

  • “Ultimately, we do think that we should be aiming for international harmonization, just like we do for other kinds of safety and security issues around science,” he said.
