POLITICO. Inside the shadowy global battle to tame the world’s most dangerous technology.

Can anyone control AI?

By Mark Scott, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard and Brendan Bordelon

On a wet November afternoon, U.S. Vice President Kamala Harris and Meta’s Nick Clegg trudged into a large tent in the grounds of a 19th century English country house north of London, took their seats at a circular table, and, among other things, set about trying to save humanity.

Amid the gloomy weather at Bletchley Park, which had been used as the Allied World War II code-breaking headquarters, Clegg and Harris joined an elite gathering of global leaders, academics and tech executives to tackle what pessimists fear is a devastating new threat facing the world: runaway artificial intelligence.

The politicians and tech executives agreed on a joint declaration of good intentions after two days of talks, but no unified answer on what should be done. Instead, they made rival pitches on how to manage a technology that will dominate much of the next decade — and will likely upend everything from business and health care to democracy itself.

Clegg, a former British deputy prime minister, argued that policing AI was akin to building a plane already in flight — inherently risky and difficult work. Harris trumpeted Washington’s efforts to address the dangers of AI through voluntary business agreements as the world’s gold standard. Ursula von der Leyen, the European Commission president, who was also in attendance, urged others to follow Brussels’ new, legally binding rulebook to crack down on the tech.

The debate represented a snapshot of a bigger truth. For the past year, a political fight has been raging around the world, mostly in the shadows, over how — and whether — to control AI. This new digital Great Game is a long way from over. Whoever wins will cement their dominance over Western rules for an era-defining technology. Once these rules are set, they will be almost impossible to rewrite.

For those watching the conversation firsthand, the haggling in the British rain was akin to 19th-century European powers carving up the world.

“It felt like an alternate reality,” said Amba Kak, head of the AI Now Institute, a nonprofit organization, who participated in the discussions. At the end of the meeting, 29 countries, including China, European Union members, and the United States, signed a voluntary agreement to reduce risks that have climbed the political agenda thanks to the arrival of OpenAI’s ChatGPT. 

In the year ahead, the cut-throat battle to control the technology will create winners and losers. By the end of 2024, policymakers expect many new AI standards to have been finalized.

For this article, POLITICO spoke to more than three dozen politicians, policymakers, tech executives and others, many of whom were granted anonymity to discuss sensitive matters, to understand the dynamics at play as the world grapples with this new disruptive technology.

The question they face is whether the U.S., the EU or the United Kingdom — or anyone else — will be able to devise a plan that Western democracies can agree on. If liberal industrialized economies fail to reach a common regime among themselves, China may step in to set the global rulebook for a technology that — in a doomsday scenario — some fear has the potential to wipe humanity off the face of the Earth.

As Brussels and Washington pitch their conflicting plans for regulating AI, the chances of a deal look far from promising.

Jousting in Japan

A month before the conference in the English rain, policymakers had been frantically trying to make progress on the other side of the world. It was October, and Věra Jourová stepped off her 16-hour flight from Brussels to Japan exhausted. The Czech politician was only a few weeks into her new role as the EU’s top tech envoy, and her first international assignment would not be easy.

Jourová’s mission was to sell Europe’s AI rulebook at a G7 meeting where Western leaders had gathered to try to design new global standards for the most advanced form of this technology, known as “generative AI,” the powerful development behind ChatGPT and rival tools.

Brussels’ approach is enshrined in the EU’s Artificial Intelligence Act, the world’s first attempt at binding legislation on the issue. Unlike the stance favored by the U.S., the EU’s vision includes bans on the most invasive forms of the technology and strict rules requiring companies like Google and Microsoft to be more open about how they design AI-based products.

“Generative AI invaded our lives so quickly and we need something fast,” Jourová told POLITICO after taking the two-hour bullet train from Tokyo to Kyoto for the summit.

At the three-day meeting in Japan, Nathaniel Fick had a rival pitch.

As Joe Biden’s top digital diplomat, Fick, a former tech executive, proposed no bans or strict requirements. Instead, he pushed for a lighter-touch regime based mostly on voluntary commitments from industry and existing domestic laws.

“People can expect the United States to weave in AI policy issues in everything we do,” Fick told POLITICO after the Kyoto summit concluded. “The frameworks, the codes, the principles we develop will become the basis for action.”

In dueling meetings with G7 policymakers, tech company executives and other influential figures, Jourová and Fick made their cases.

For the EU’s Jourová, the pitch was simple. Brussels had already marked itself out as the West’s digital police officer, with a flurry of regulations on everything from protecting consumers’ privacy to taming social media.

The summit timetable was packed, with little downtime beyond snatched cigarette breaks and rushed lunches in the cafeteria. Jourová argued that only Europe could deliver the necessary rigor. The EU, she said, could hit companies with blockbuster fines and ban the most invasive forms of AI — such as social scoring, the complex algorithmic tracking of people's movements infamously used in China.

“She came with a plan, and that was to convince us Europe’s rules were the only game in town,” said one of the people who met Jourová. Another official from a G7 country said Europe’s digital chief dismissed Washington’s alternative proposal for its lack of binding legislation.

Fick’s counteroffensive relied on America’s undisputed position as the world’s powerhouse of AI development.

A political stalemate on Capitol Hill means no comprehensive legislation from Washington is likely to come anytime soon. But the White House has already made a flurry of announcements. Last July, the Biden administration secured voluntary commitments from leading tech giants like Amazon to make AI safer. Then Biden issued an executive order, announced on Oct. 30, which empowered federal agencies to act.

Fick’s pitch included criticizing Brussels’ legislation for imposing too many regulatory burdens on businesses, compared to Washington’s willingness to allow companies to innovate, according to two individuals who met Fick in Japan.

“The message was clear,” said another Western diplomat who attended the Kyoto G7 summit. “Washington wasn’t going to let Brussels get its way.”

Fine dining before the apocalypse

Back in Europe, efforts to reach a consensus were continuing — in some unlikely surroundings.

A lavish six-course dinner at the Elysée Palace, the French president’s 18th-century official residence in Paris, doesn’t exactly scream high-tech.

But over a five-hour meal of gourmet cuisine and fine wine in November, Emmanuel Macron held court with 40 industry experts, including OpenAI President Greg Brockman and Meta’s chief AI guru, Yann LeCun.

The dinner conversation quickly turned to rulemaking.

Macron is a global powerbroker. France has a large domestic AI industry and the president is personally eager to lead international efforts to set global rules for the technology. Unlike stereotypes of regulation-loving French politicians, Macron remains wary of Brussels’ AI Act, fearing it will hamstring firms like Mistral, an AI startup co-founded by Cedric O, France’s former digital minister.

As Macron listened carefully, his dinner guests laid out their rival proposals. The discussion exposed the clash at the heart of the entire global debate: between those who believe the gravest risks are still many years away, and so take a more optimistic view of AI's potential and want to allow as much experimentation as possible, and those who think the harms are already here and fear it may soon be too late to protect humanity from serious damage.

OpenAI’s Brockman, one of those who is relaxed about the immediate risks and thinks the focus should be on addressing longer-term threats, told the French president that AI was overwhelmingly a force for good, according to three people who attended the dinner. Any regulation — particularly rules that could hamper the company’s meteoric growth — should focus on long-term threats like AI eventually overriding human control, he added.

Others like Meredith Whittaker, an American technologist also present at the Paris dinner in November, argued that the real-world harms of existing AI — including faulty datasets that entrenched racial and gender biases — required politicians to act now.

“Macron took it all in,” said one of the attendees who, like others present, was granted anonymity to discuss the private meeting. “He wanted people to know France was behind greater regulation, but also that France was open for business.”

Similar arguments — pitting the need for rules to focus on longer-term risks against an immediate, emergency crackdown — are now playing out across Western capitals.

Scary deepfakes

While national leaders tried to wrap their heads around the problem, tech executives were criss-crossing the planet urging politicians not to over-regulate.

Eric Schmidt, Google’s former chief executive, and LinkedIn founder Reid Hoffman flew between Washington, London and Paris to offer their own take on how to handle AI, according to five individuals familiar with the discussions. An overzealous crackdown, the tech titans argued, would hand authoritarian China a decisive advantage in AI. Both men have invested heavily in AI startups.

To reinforce their arguments, tech bosses who want lighter-touch regulation got personal. AI lobbyists showed at least two Western leaders life-like deepfake videos — of the leaders themselves. These AI-generated forgeries are still at the technology’s cutting edge and are often clunky and easy to spot. But the lobbyists used the deepfakes to show the leaders how the tech had the potential to evolve to pose serious risks in the future, according to three individuals briefed on those meetings.

Opponents claim such tactics allow companies like OpenAI and Alphabet to underplay how the technology is harming people right now — for example, by rejecting government benefits claims from minorities and underprivileged people in society.

“Most Americans don’t realize AI is already out there,” said Cory Booker, a Democratic U.S. senator. “I want to make sure that our current laws against discrimination … our current laws affirming protections, are actually being enforced.”

Heading into 2024, those who want a lighter touch appear to be winning, despite the EU’s new binding rules on AI.

After Brussels reached a political deal in December on its AI Act, Macron — fresh from his personal AI dinner — sided with German Chancellor Olaf Scholz and Italian Prime Minister Giorgia Meloni to urge fewer controls on the latest forms of the technology. The French president worried such regulation would hamper local champions — including Mistral, the French AI company backed, in part, by Schmidt, Google’s ex-boss.

France remained a holdout until finally giving in to political pressure in early February to support the rules, albeit with major reservations. “We will regulate things that we will no longer produce or invent,” Macron told an audience in Toulouse after securing some last-minute carve-outs for European firms. “This is never a good idea.”

Bioweapons and Big Tech

Tristan Harris thought he had landed a knockout punch.

In a private hearing between U.S. lawmakers and tech experts in September, Harris, a co-founder of the Center for Humane Technology, a nonprofit, described how his engineers had coerced Meta’s latest AI product into committing a terrifying act: the construction of a bioweapon.

This showed that Meta’s approach to artificial intelligence was far too lax and potentially dangerous, Harris argued, according to two officials who attended the meeting.

Mark Zuckerberg’s tech giant favored so-called open-source technology — AI easily accessible to all — with few safeguards against abuse. Such openness, Harris added, would lead to real-world harm, including the spread of AI-generated weapons of mass destruction.

His triumph didn’t last long. Zuckerberg, who was also present at the Capitol Hill hearing, quickly whipped out his phone and found the same bioweapon information via a simple Google search. Zuckerberg’s counterpunch led to a smattering of laughter from the room. It blunted Harris’ accusation that Meta’s open-source AI approach was a threat to humanity.

“It wasn’t the slam dunk Harris thought it would be,” said one of the officials granted anonymity to discuss the meeting.

The Harris-Zuckerberg showdown captures another key question for policymakers: Should AI be limited to a handful of trusted companies or opened up to all comers to exploit?

According to those who favor licensing advanced AI to a select few companies, the immediate risks are too stark to let the technology loose on the wider world. A PowerPoint presentation warning that open-sourced AI models would unleash uncontrollable bioweapons accessible to terrorists has been presented in private to government officials in London, Paris and Washington, according to six policymakers involved in those meetings. POLITICO could not determine which organization was behind these dire predictions.

Microsoft and OpenAI are among the companies that favor restricting the technology to a small number of firms so regulators can build ties with AI innovators.

“A licensing-based approach is the right way forward,” Natasha Crampton, Microsoft’s chief responsible AI officer, told POLITICO. “It allows a close interaction between the developer of the tech and the regulator to really assess the risks.”

Critics of Silicon Valley’s dominance argue Big Tech has a vested interest in shutting out competition. According to open-source advocate Mark Surman, executive director of the Mozilla Foundation, the world’s largest tech companies want to capture the fledgling AI industry for themselves and cut off rival upstarts.

During the fall, Surman toured Western capitals to urge senior U.S., EU and U.K. officials not to limit who could develop next-generation AI models. Unlike other Silicon Valley giants, Zuckerberg’s Meta has sided with people like Surman who want to keep the technology open to all, a point Zuckerberg reinforced in his September showdown with Harris over bioweapons.

The open approach is winning friends in powerful parts of Washington, although U.S. officials remain divided. In Brussels, policymakers agreed to free almost all open-source AI from the toughest oversight — as long as the technology is not used in outlawed practices, such as wholesale facial recognition. In the U.K., however, officials lean toward restricting the most advanced AI to a handful of companies.

“If we leave AI in the hands of the few, there aren’t enough eyes out there to scrutinize what they do with it,” said Surman.

The future is here

If 2023 was the year AI lobbying burst into the political mainstream, 2024 will decide who wins.

Key decision dates are fast approaching. By the summer, parts of Europe’s AI Act will come into force. The White House’s rival AI executive order is already changing how federal agencies handle the technology. More reforms in Washington are expected by the year’s end.

Other countries like the U.K., Japan and Canada — as well as China — are making their own plans, which include greater oversight of the most advanced AI and efforts to get other countries to adopt their rules.

What’s clear, according to the dozens of AI experts who spoke to POLITICO, is that in 2024, the public’s desire for political action is unlikely to disappear. AI’s impact, especially in a year of mass elections from the U.S. and the EU to the Philippines and India, is anything but certain.

“This business about AI’s loss of control is suddenly being taken seriously,” said Max Tegmark, president of the Future of Life Institute, a nonprofit that advocates for greater checks on AI. “People are realizing this isn’t a long-term thing anymore. It’s happening.”

Mark Scott reported from London, Brussels, Paris, Berlin and Washington; Clothilde Goujard from Kyoto, Japan; Vincent Manancourt from London; Gian Volpicelli from Brussels; and Mohar Chatterjee and Brendan Bordelon from Washington.

CORRECTION: A previous version of this article misstated which firms favor restricting the technology to a small number of firms. They are Microsoft and OpenAI.