A relevant opinion from a respected source… good luck with that.

Study the hard data. Obviously, the extreme risks that industry products pose to the public can never be successfully regulated by industry on a voluntary basis; regulatory enforcement is required.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Bloomberg Opinion: The AI ‘Safety Movement’ Is Dead. By Tyler Cowen, Columnist.

Public pressure to rein in artificial intelligence may be waning, but the work of making these systems less risky is just beginning.

21 May 2024, By Tyler Cowen

Tyler Cowen is a Bloomberg Opinion columnist, a professor of economics at George Mason University, and host of the Marginal Revolution blog.

May 2024 will be remembered as the month that the AI safety movement died. It will also be remembered as the time when the work of actually making artificial intelligence safer began in earnest.

Some history: In the mid-2000s, a movement known as “effective altruism” made AI safety a top priority, based on fears that highly advanced AI models could vanquish us all or at least cause significant global chaos. Two leading AI companies, Anthropic and OpenAI, set up complicated board structures, with nonprofit elements in the mix, to keep those companies from producing dangerous systems.

The safety movement probably peaked in March 2023 with a petition for a six-month pause in AI development, signed by many luminaries, including specialists in the AI field. As I argued at the time, it was a bad idea, and got nowhere.

Fast forward to the present. Senate Majority Leader Chuck Schumer and his working group on AI have issued a guidance document for federal policy. The plans involve a lot of federal support for the research and development of AI, and a consistent recognition of the national-security importance of the US maintaining its lead in AI. Lawmakers seem to understand that they would rather face the risks of US-based AI systems than have to contend with Chinese developments without a US counterweight. The early history of Covid, when the Chinese government behaved recklessly and nontransparently, has driven this realization home.

No less important is the behavior of the major tech companies themselves. OpenAI, Anthropic, Google and Meta all released major service upgrades this spring. Their new services are smarter, faster, more flexible and more capable. Competition has heated up, and that will spur further innovation.

What does the broader social evidence say about the dangers of AI? From a market standpoint, at least, the world is doing just fine; the markets are hitting new highs. That is not what we would expect if the end of the world were nigh. Investors seem more concerned with inflation, interest rates and conflict in the Middle East.

How about academia? In Science and Nature, two of the better-known journals, top scientists are not publishing new models suggesting that AI will bring about the end of the world. In economics, Daron Acemoglu, arguably the best-published economist of his generation, recently published a model suggesting that AI would boost worldwide GDP by very modest amounts. I think he is underestimating the potential upside, but in any case his paper is hardly a foreboding of doom. And it is notable that this is the same Daron Acemoglu who signed that petition for a six-month pause in AI research just over a year ago.

As for the philosophers, Nick Bostrom, formerly director of Oxford University’s Future of Humanity Institute, was among the first to formulate and spread AI safety fears, under the broader concept of “existential risk.” In his most recent book, published in March, he has moved to a more optimistic position. The institute itself, for reasons that remain unclear, has since been shut down.

The biggest current obstacles to AI development are the hundreds of pending AI regulatory bills in scores of US states. Many of those bills would, intentionally or not, significantly restrict AI development, such as one in California that would require pre-approval for advanced systems. In the past, this kind of state-level rush has typically led to federal consolidation, so that overall regulation is coordinated and not too onerous. Whether a good final bill results from that process is uncertain, but Schumer’s project suggests that the federal government is more interested in accelerating AI than hindering it.

Still, to go back to where I started: Is the demise of the AI safety movement cause for panic? Hardly. Complete engineering safety in advance was never a realistic prospect. When humans invented computers, or for that matter the printing press, very few safety issues were worked out beforehand, nor could they have been. Rather, as history progressed, safety problems were addressed on a case-by-case basis. Not every pro-safety effort succeeded — dangerous books were indeed published — but the printing press was nonetheless a boon for humankind.

If we want to do better this time around, and build safe AI systems — and we should — then the best way to get there is simply to forge ahead: Build the systems and offer their services to users, iterating and improving them along the way. Fortunately, that now seems to be just what is happening. The influence of the AI safety movement may have waned, but the opportunity to make AI safer is only just beginning.
