FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Can large language models democratize access to dual-use biotechnology? 06 Jun 2023.


“These results strongly suggest that the existing evaluation and training process for LLMs, which relies heavily on reinforcement learning with human feedback (RLHF) [18], is inadequate to prevent them from providing malicious actors with accessible expertise relevant to inflicting mass death. New and more reliable safeguards are urgently needed.”

Summary

Widely accessible artificial intelligence threatens to allow people without laboratory training to identify, acquire, and release viruses highlighted as pandemic threats in the scientific literature. Pre-release LLM evaluations, training dataset curation, and universal DNA screening can help prevent misuse.

Abstract

Large language models (LLMs) such as those embedded in ‘chatbots’ are accelerating and democratizing research by providing comprehensible information and expertise from many different fields. However, these models may also confer easy access to dual-use technologies capable of inflicting great harm. To evaluate this risk, the ‘Safeguarding the Future’ course at MIT tasked non-scientist students with investigating whether LLM chatbots could be prompted to assist non-experts in causing a pandemic. In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization. Collectively, these results suggest that LLMs will make pandemic-class agents widely accessible as soon as they are credibly identified, even to people with little or no laboratory training. Promising nonproliferation measures include pre-release evaluations of LLMs by third parties, curating training datasets to remove harmful concepts, and verifiably screening all DNA generated by synthesis providers or used by contract research organizations and robotic cloud laboratories to engineer organisms or viruses.


INSIDER. AI chatbots helped college students come up with new pandemic pathogens they could spread.

Gabby Landsverk

Jun 16, 2023

  • A new MIT project found that AI provided students with information about causing a new pandemic.
  • Bots like ChatGPT provided examples of deadly pathogens and advice on how to obtain them.
  • AI can’t yet coach someone to cause a massive bioterrorist attack, but more security is needed.

Forget AI taking over people’s jobs — new research suggests chatbots could potentially contribute to bioterrorism, helping design viruses capable of causing the next pandemic, Axios reports.

Scientists from MIT and Harvard assigned students to investigate likely sources of a future pandemic using bots like ChatGPT, an artificial intelligence model that can provide conversational answers to prompts on a wide variety of topics.

Students spent an hour asking the chatbots about topics like pandemic-capable pathogens, transmission, and access to pathogen samples. The bots readily provided examples of dangerous viruses that would be particularly efficient at causing widespread damage, due to low immunity rates and high transmissibility.

For instance, the bots suggested variola major, otherwise known as the smallpox virus, because it could spread widely owing to the lack of current vaccination against it or exposure to similar viruses that might confer immunity.

The bots also helpfully advised students on how they might use reverse genetics to generate infectious samples, and even offered suggestions for where to obtain the right equipment.

The researchers noted in a paper summarizing the project that chatbots aren’t yet capable of helping someone without expertise engineer full-on biological warfare. And biotech experts told Axios that the threat could be offset by using AI to design antibodies that may protect people from future outbreaks.

However, the experiment’s results “demonstrate that artificial intelligence can exacerbate catastrophic biological risks,” and the potential fatality of pandemic-level viruses could be comparable to nuclear weapons, the researchers wrote.

The students also found it was easy to evade current safeguards set up to prevent chatbots from providing dangerous information to bad actors. As a result, more rigorous precautions are needed to clamp down on sensitive information shared via AI, the researchers concluded.
