OpenAI acknowledges new models increase risk of misuse to create bioweapons

by Universalwellnesssystems

OpenAI acknowledged that its latest model “significantly” increases the risk that artificial intelligence could be used to create biological weapons.

The San Francisco-based company on Thursday unveiled a new model called o1, touting its ability to reason, solve difficult math problems and answer scientific research questions. These advances are seen as crucial steps in the effort to create machines with human-level cognitive abilities, or artificial general intelligence.

OpenAI’s system card, a document that explains how the model works and what risks it poses, says the new model carries a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons, the highest risk rating OpenAI has given any of its models to date. The company said the technology had brought a “significant improvement” in the ability of experts to create biological weapons.

Experts say AI software with more advanced capabilities, such as the ability to perform step-by-step reasoning, poses a greater risk of misuse if it falls into the hands of bad actors.

Yoshua Bengio, a computer science professor at the University of Montreal and one of the world’s leading AI scientists, said that if OpenAI now represents a “medium risk” for chemical and biological weapons, it “only makes it even more important and urgent” to adopt legislation such as a hotly debated bill in California to regulate the sector.

The bill, known as SB 1047, would require manufacturers of the most expensive models to take steps to minimize the risk that their models could be used to develop biological weapons. As “state-of-the-art” AI models evolve toward AGI, “without the proper guardrails, the risks will continue to grow,” Bengio said. “The increasing reasoning capabilities of AI and the use of this skill to deceive are particularly dangerous.”

The warnings come as tech companies including Google, Meta and Anthropic are racing to build and improve advanced AI systems, aiming to create software that can act as “agents” to help humans complete tasks and go about their lives.

These AI agents are also seen as a potential moneymaker for companies that have so far struggled with the huge costs required to train and run new models.

OpenAI’s chief technology officer, Mira Murati, told the Financial Times that o1 will be made available to ChatGPT’s paid subscribers and to a wider audience of programmers through its API, but that the company is being particularly “cautious” about releasing it to the public because of its advanced capabilities.

She added that the model had been tested by a so-called red team, experts from different scientific disciplines who tried to push it to its limits. Murati said the current model performs much better than its predecessor on overall safety metrics.

Additional reporting by George Hammond in San Francisco
