FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Leading A.I. firms, including Microsoft, Google and OpenAI, will agree to voluntary guidelines that may be a first step to wider regulations.

In a potentially significant victory for the Biden administration, top players in artificial intelligence, including Microsoft, Google and OpenAI, are to meet at the White House on Friday to pledge to build safeguards into their development of a technology that has captivated Wall Street and rattled many world leaders.

The commitments are voluntary, but industry watchers see the move as an important first step toward protecting consumers and businesses.

The White House wants the companies to commit to “responsible” development. Concerns are rife that A.I. could turbocharge misinformation and cybercrime, and pose a national security risk. There are also fears that the technology will eliminate jobs and that unethical players will misappropriate the intellectual property of companies, artists and ordinary people as they commercialize their generative A.I. tools. Such complaints have already led to a string of lawsuits.

The White House said it would work with overseas allies, including Britain, Germany, Japan and South Korea, to develop common groundwork on A.I. governance. The effort comes as China is developing its own guidelines for A.I. and chatbots, which have exploded in use in recent months.

Congress has been slow to legislate on A.I., despite calls from the industry for regulation. The Biden administration, which hosted tech executives at the White House in the spring for a “frank discussion” about the future of A.I., said it was working on “an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.”

An industry effort is probably the quickest way to see an impact. The safeguards include pledges not to commercially release A.I. products until they’ve undergone safety tests; to institute a watermarking system to minimize fraud and deception; to publicly disclose the capabilities and limitations of the tools; and to research the societal risks of the technology.

Such commitments are significant, Mikko Hypponen, the chief research officer at the software company WithSecure and a cybercrime adviser to Europol, told DealBook. Among his chief concerns are malware writers using generative A.I. to develop powerful hacking tools. These developments are inevitable, he said, but until rules are established, the risks can be minimized by industry cooperation. Otherwise, the corporate focus will be on a competition for market share and “a race is a dangerous thing when you are trying to do things safely and securely,” he said.

Those scheduled to attend the White House session include: Brad Smith of Microsoft, Nick Clegg from Meta, Kent Walker of Google, Greg Brockman of OpenAI, Adam Selipsky of Amazon Web Services, Dario Amodei of Anthropic and Mustafa Suleyman of Inflection AI.