The New York Times, Opinion. By Nicholas Kristof.

A.I. May Save Us or May Construct Viruses to Kill Us.

July 27, 2024

Here’s a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.

That’s the conclusion of Jason Matheny, the president of the RAND Corporation, a think tank that studies security matters and other issues.

“It wouldn’t cost more to create a pathogen that’s capable of killing hundreds of millions of people versus a pathogen that’s only capable of killing hundreds of thousands of people,” Matheny told me.

In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response.

I told Matheny that I was The Times’s Tokyo bureau chief when a religious cult called Aum Shinrikyo used chemical and biological weapons in terrorist attacks, including one in 1995 that killed 13 people in the Tokyo subway. “They would be capable of orders of magnitude more damage” today, Matheny said.

I’m a longtime member of the Aspen Strategy Group, a bipartisan organization that explores global security issues, and our annual meeting this month focused on artificial intelligence. That’s why Matheny and other experts joined us — and then scared us.

In the early 2000s, some of us worried about smallpox being reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta and in Russia’s Novosibirsk region that have retained it since the disease was eradicated. But with synthetic biology, now it wouldn’t have to be stolen.

Some years ago, a research team created horsepox, a cousin of the smallpox virus, in six months for $100,000, and with A.I. it could be easier and cheaper to refine the virus.

One reason biological weapons haven’t been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person’s DNA at a dinner or reception.

Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.

A.I. has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.

One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades figuring out the shapes of individual proteins; then Google DeepMind introduced AlphaFold, which can predict those shapes within minutes. “It’s Google Maps for biology,” Kent Walker, president of global affairs at Google, told me.

Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.

So it’s unclear whether A.I. will save us or kill us first.

Scientists for years have explored how A.I. may dominate warfare, with autonomous drones or robots programmed to find and eliminate targets instantaneously. Warfare may come to involve robots fighting robots.

Robotic killers will be heartless in a literal sense, but they won’t necessarily be particularly brutal. They won’t rape, and they might be less prone than human soldiers to the rage that leads to massacres and torture.

One great uncertainty is the extent and timing of job losses — for truck drivers, lawyers and perhaps even coders — that could amplify social unrest. A generation ago, American officials were oblivious to the way trade with China would cost factory jobs and apparently lead to an explosion of deaths of despair and to the rise of right-wing populism. May we do better at managing the economic disruption of A.I.

One reason for my wariness of A.I. is that while I see its promise, the past 20 years have been a reminder of technology’s capacity to oppress. Smartphones were dazzling — and apologies if you’re reading this on your phone — but there’s evidence tying them to the deteriorating mental health of young people. A randomized controlled trial published just this month found that children who gave up their smartphones enjoyed improved well-being.

Dictators have benefited from new technologies. Liu Xiaobo, the Chinese dissident who received a Nobel Peace Prize, thought that “the internet is God’s gift to the Chinese people.” It did not work out that way: Liu died in Chinese custody, and China has used A.I. to ramp up surveillance and tighten the screws on citizens.

A.I. may also make it easier to manipulate people, in ways that recall Orwell. A study released this year found that when GPT-4 had access to basic personal information about the people it debated, its odds of persuading someone were about 80 percent higher than a human’s with the same data. Congress was right to worry about manipulation of public opinion by the TikTok algorithm.

All this underscores why it is essential that the United States maintain its lead in artificial intelligence. As much as we may be leery of putting our foot on the gas, this is not a competition in which it is OK to be the runner-up to China.

President Biden is on top of this, and limits he placed on China’s access to the most advanced computer chips will help preserve our lead. The Biden administration has recruited first-rate people from the private sector to think through these matters and issued an important executive order last year on A.I. safety, but we will also need to develop new systems in the coming years for improved governance.

I’ve written about A.I.-generated deepfake nude images and videos, and the irresponsibility of both the deepfake companies and the major search engines that drive traffic to deepfake sites. And tech companies have periodically used legal immunities to avoid accountability for promoting the sexual exploitation of children. None of that inspires confidence in these companies’ ability to govern themselves responsibly.

“We’ve never had a circumstance in which the most dangerous, and most impactful, technology resides entirely in the private sector,” said Susan Rice, who was President Barack Obama’s national security adviser. “It can’t be that technology companies in Silicon Valley decide the fate of our national security and maybe the fate of the world without constraint.”

I think that’s right. Managing A.I. without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.

LEARN MORE

On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial

21 Mar 2024

The development and popularization of large language models (LLMs) have raised concerns that they will be used to create tailor-made, convincing arguments to push false or misleading narratives online. Early work has found that language models can generate content perceived as at least on par with, and often more persuasive than, human-written messages. However, there is still limited knowledge about LLMs’ persuasive capabilities in direct conversations with human counterparts and how personalization can improve their performance. In this pre-registered study, we analyze the effect of AI-driven persuasion in a controlled, harmless setting. We create a web-based platform where participants engage in short, multiple-round debates with a live opponent. Each participant is randomly assigned to one of four treatment conditions, corresponding to a two-by-two factorial design: (1) Games are either played between two humans or between a human and an LLM; (2) Personalization might or might not be enabled, granting one of the two players access to basic sociodemographic information about their opponent. We found that participants who debated GPT-4 with access to their personal information had 81.7% (p < 0.01; N=820 unique participants) higher odds of increased agreement with their opponents compared to participants who debated humans. Without personalization, GPT-4 still outperforms humans, but the effect is lower and statistically non-significant (p=0.31). Overall, our results suggest that concerns around personalization are meaningful and have important implications for the governance of social media and the design of new online environments.
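
A note on reading that headline number: an 81.7 percent increase in odds is not the same as being 81.7 percent more likely to persuade. The minimal Python sketch below converts the paper’s odds ratio into probabilities; the baseline agreement rates are hypothetical, chosen only for illustration, since the abstract reports the odds ratio rather than a baseline.

```python
# Illustrative conversion between the paper's odds ratio and probabilities.
# The 1.817 odds ratio comes from the abstract above; the baseline
# agreement rates below are hypothetical values for illustration only.

def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Probability implied by scaling the baseline odds by odds_ratio."""
    odds = p_baseline / (1.0 - p_baseline)  # probability -> odds
    new_odds = odds * odds_ratio            # apply the reported odds ratio
    return new_odds / (1.0 + new_odds)      # odds -> probability

ODDS_RATIO = 1.817  # GPT-4 with personalization vs. human debaters

for p in (0.2, 0.5):  # hypothetical baseline rates of increased agreement
    p_new = apply_odds_ratio(p, ODDS_RATIO)
    print(f"baseline {p:.0%} -> {p_new:.1%} agreement "
          f"({(p_new - p) / p:+.0%} relative increase)")
```

Under these assumed baselines, an odds ratio of 1.817 corresponds to roughly a 29 to 56 percent relative increase in the probability of agreement, which is why “higher odds” should not be read as “almost twice as likely.”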
