A very good read from well-respected sources!

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

THE NEW YORK TIMES. E.U. Agrees on Landmark Artificial Intelligence Rules.

The agreement over the A.I. Act solidifies one of the world’s first comprehensive attempts to limit the use of artificial intelligence.

By Adam Satariano, Reporting from London

Dec. 8, 2023

European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.

The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.

European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.

Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

“Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter,” Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.

Yet even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for A.I. development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.

The deal reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and dragged into Thursday. The final agreement was not immediately public as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes must be held in Parliament and the European Council, which comprises representatives from the 27 countries in the union.

Regulating A.I. gained urgency after last year’s release of ChatGPT, which became a worldwide sensation by demonstrating A.I.’s advancing abilities. In the United States, the Biden administration recently issued an executive order focused in part on A.I.’s national security effects. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.

At stake are trillions of dollars in estimated value as A.I. is predicted to reshape the global economy. “Technological dominance precedes economic dominance and political dominance,” Jean-Noël Barrot, France’s digital minister, said this week.

Europe has been one of the regions furthest ahead in regulating A.I., having started working on what would become the A.I. Act in 2018. In recent years, E.U. leaders have tried to bring a new level of oversight to tech, akin to regulation of the health care or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.

A first draft of the A.I. Act was released in 2021. But policymakers found themselves rewriting the law as technological breakthroughs emerged. The initial version made no mention of general-purpose A.I. models like those that power ChatGPT.

Policymakers agreed to what they called a “risk-based approach” to regulating A.I., where a defined set of applications face the most oversight and restrictions. Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.

Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.

The European Union debate was contentious, a sign of how A.I. has befuddled lawmakers. E.U. officials were divided over how deeply to regulate the newer A.I. systems for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.

The law added requirements for makers of the largest A.I. models to disclose information about how their systems work and evaluate for “systemic risk,” Mr. Breton said.

The new regulations will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but other businesses that are expected to use the technology in areas such as education, health care and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits.

Enforcement remains unclear. The A.I. Act will involve regulators across 27 nations and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the novel rules in court. Previous E.U. legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for being unevenly enforced.

“The E.U.’s regulatory prowess is under question,” said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, who has advised European lawmakers on the A.I. Act. “Without strong enforcement, this deal will have no meaning.”

Learn more:

  • EU agrees ‘historic’ deal with world’s first laws to regulate AI. Agreement between European Parliament and member states will govern artificial intelligence, social media and search engines. Officials provided few details on what exactly will make it into the eventual law, which would not take effect until 2025 at the earliest. – THE GUARDIAN
  • E.U. reaches deal on landmark AI bill, racing ahead of U.S. The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance. The legislation ultimately included restrictions for foundation models but gave broad exemptions to “open-source models,” which are developed using code that’s freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France’s Mistral and Germany’s Aleph Alpha, as well as Meta, which released the open-source model LLaMA. – THE WASHINGTON POST
  • European Union agrees to regulate potentially harmful effects of artificial intelligence. European Union lawmakers struck a deal Friday agreeing to one of the world’s first major comprehensive artificial intelligence laws. The landmark legislation, called the AI Act, sets up a regulatory framework to promote the development of AI while addressing the risks associated with the rapidly evolving technology. The legislation bans harmful AI practices “considered to be a clear threat to people’s safety, livelihoods and rights.” The law comes amid growing fears about the disruptive capabilities of artificial intelligence. – CNN
  • AI: EU agrees landmark deal on regulation of artificial intelligence. European Union officials have reached a provisional deal on the world’s first comprehensive laws to regulate the use of artificial intelligence. The US, UK and China are all rushing to publish their own guidelines. – BBC NEWS
  • EU agrees landmark rules on artificial intelligence. New law lays out restrictive regime for emerging technology. – FT
    • EU lawmakers have agreed the terms for landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime on the development of the technology.

      Thierry Breton, EU commissioner, confirmed in a post on X that a deal had been reached, calling it a historic agreement. “The EU becomes the very first continent to set clear rules for the use of AI,” he wrote. “The AI Act is much more than a rule book — it’s a launch pad for EU start-ups and researchers to lead the global AI race.”

      The deal followed years of discussions among member states and members of the European Parliament over how AI should be curbed so that humanity’s interests remain at the heart of the legislation. It came after marathon discussions that began on Wednesday this week, and details of the deal were still emerging after the announcement.

      Breton said legislators agreed on a two-tier approach, with “transparency requirements for all general-purpose AI models (such as ChatGPT)” as well as “stronger requirements for powerful models with systemic impacts” across the EU. He said the rules would implement safeguards for the use of AI technology while avoiding an “excessive burden” for companies.

      Among the new rules, legislators agreed to strict restrictions on the use of facial recognition technology outside narrowly defined law enforcement exceptions. The legislation also bans the use of AI for “social scoring” — using metrics to establish how upstanding someone is — and AI systems that “manipulate human behaviour to circumvent their free will.” The use of AI to exploit people who are vulnerable because of their age, disability or economic situation is banned as well. Companies that fail to comply with the rules face fines of €35mn or 7 per cent of global revenue.

      Some tech groups were not pleased. Cecilia Bonefeld-Dahl, director-general of DigitalEurope, which represents the continent’s technology sector, said: “We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head. The new requirements — on top of other sweeping new laws like the Data Act — will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers.”

      MEPs had spent years arguing over their position before negotiations began with member states and the European Commission, the executive body of the EU. All three — national ministries, parliamentarians and the commission — agreed to a final text on Friday night, allowing the legislation to become law.

      European companies have expressed concern that overly restrictive rules on the technology, which is rapidly evolving and gained traction after the popularisation of OpenAI’s ChatGPT, will hamper innovation. In June, dozens of Europe’s largest companies, including France’s Airbus and Germany’s Siemens, said the rules as constituted were too tough to nurture innovation and help local industries.

      Last month, the UK hosted a summit on AI safety, leading to broad commitments from 28 nations to work together to tackle the existential risks stemming from advanced AI. That event attracted leading tech figures such as OpenAI’s Sam Altman, who has previously been critical of the EU’s plans to regulate the technology.

      Ursula von der Leyen, the president of the European Commission, praised legislators for the political agreement on the AI rules. She said: “This is a historic moment. The AI Act transposes European values to a new era.” She added: “Until the Act will be fully applicable, we will support businesses and developers to anticipate the new rules. Around 100 companies have already expressed their interest to join our AI Pact, by which they would commit on a voluntary basis to implement key obligations of the Act ahead of the legal deadline.”

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.