FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Marcus: “hey everybody, we might increase the chance that you or a loved one might die in a biological weapons attack, but don’t sweat it … because … we only make it available to paid subscribers.”

BiocommAI bottom line: Dual-use, powerful AI technology must comply with applicable laws and norms (companies and users must not harm or kill people, on purpose or by accident).

1. Who uses dangerous technology, and where, matters tremendously.

2. Robust KYC (know-your-customer) and AML (anti-money-laundering) controls would be a good start; a minimal gating sketch follows below.
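To make point 2 concrete, here is a minimal sketch (in Python) of what a KYC/AML gate in front of a dual-use model endpoint could look like. Every name in it (Customer, is_authorized, serve_model_request) is hypothetical; a real deployment would rely on accredited identity-verification providers, sanctions-list screening services, and audited logging.

    # Hypothetical sketch of a KYC/AML gate in front of a dual-use model endpoint.
    import logging
    from dataclasses import dataclass

    @dataclass
    class Customer:
        customer_id: str
        identity_verified: bool   # passed a know-your-customer identity check
        sanctions_cleared: bool   # cleared anti-money-laundering / sanctions screening

    def is_authorized(customer: Customer) -> bool:
        # Both checks must pass before any dual-use capability is served.
        return customer.identity_verified and customer.sanctions_cleared

    def serve_model_request(customer: Customer, prompt: str) -> str:
        if not is_authorized(customer):
            logging.warning("Blocked request from %s", customer.customer_id)
            return "Access denied: verification and screening are required."
        # In a real system the prompt would now go to the model,
        # with content-level safeguards applied on top of this gate.
        return "Request accepted."

The point of the sketch is the ordering: identity and screening checks sit in front of the model, so "who uses it, and where" is decided before any capability is exposed.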

 


OpenAI acknowledges new models increase risk of misuse to create bioweapons – Financial Times

Company unveils o1 models that it claims have new reasoning and problem-solving abilities


13 September 2024

OpenAI’s latest models have “meaningfully” increased the risk that artificial intelligence will be misused to create biological weapons, the company has acknowledged.

The San Francisco-based group announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions.

These advances are seen as a crucial breakthrough in the effort to create artificial general intelligence — machines with human-level cognition. OpenAI’s system card, a tool to explain how the AI operates, said the new models had a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons — the highest risk that OpenAI has ever given for its models.

The company said this rating meant the technology has “meaningfully improved” the ability of experts to create bioweapons. AI software with more advanced capabilities, such as the ability to perform step-by-step reasoning, poses an increased risk of misuse in the hands of bad actors, according to experts.

Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world’s leading AI scientists, said that if OpenAI now represented “medium risk” for chemical and biological weapons “this only reinforces the importance and urgency” of legislation such as a hotly debated bill in California to regulate the sector.

The measure — known as SB 1047 — would require makers of the most costly models to take steps to minimise the risk their models were used to develop bioweapons. As “frontier” AI models advance towards AGI, the “risks will continue to increase if the proper guardrails are missing”, Bengio said. “The improvement of AI’s ability to reason and to use this skill to deceive is particularly dangerous.”

These warnings came as tech companies including Google, Meta and Anthropic are racing to build and improve sophisticated AI systems, as they seek to create software that can act as “agents” that assist humans in completing tasks and navigating their lives. These AI agents are also seen as potential moneymakers for companies that are battling with the huge costs required to train and run new models.

Mira Murati, OpenAI’s chief technology officer, told the Financial Times that the company was being particularly “cautious” with how it was bringing o1 to the public, because of its advanced capabilities, although the product will be widely accessible via ChatGPT’s paid subscribers and to programmers via an API.
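For readers curious about the API route mentioned above, the sketch below shows how a programmer would typically call the o1 preview model through OpenAI's official Python SDK; the model identifier "o1-preview" matches the September 2024 launch naming, but treat the details as illustrative rather than authoritative.

    # Minimal sketch: calling the o1 preview model via the OpenAI Python SDK (openai >= 1.x).
    # Assumes the OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="o1-preview",  # illustrative model name from the September 2024 launch
        messages=[
            {"role": "user", "content": "Summarise the key safety claims in OpenAI's o1 system card."}
        ],
    )

    print(response.choices[0].message.content)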

She added that the model had been tested by so-called red-teamers — experts in various scientific domains who have tried to break the model — to push its limits. Murati said the current models performed far better on overall safety metrics than their predecessors.

OpenAI said the preview model “is safe to deploy under [its own policies and] rated ‘medium risk’ on [its] cautious scale, because it doesn’t facilitate increased risks beyond what’s already possible with existing resources”.


Humble blog editorial comment… Mathematical proof of safe use – no CBRN uplift – must be required.
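To make that demand concrete, here is one hedged sketch (in LaTeX) of the kind of property such a proof would have to establish; the symbols – model M, deployment input set, CBRN uplift measure, and tolerance epsilon – are illustrative choices, not an established standard.

    \forall\, p \in \mathcal{P}_{\mathrm{deploy}} : \quad
    \mathrm{Uplift}_{\mathrm{CBRN}}\big(M, p\big) \le \varepsilon,
    \qquad \varepsilon \to 0

The difference from current practice is the quantifier: a system card reports measured risk on sampled evaluations, whereas a proof would have to cover every deployment input, which is far beyond today's state of the art.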