OpenAI Acknowledges O1 Model Boosts Bio Weapon Risks


OpenAI has acknowledged that its latest AI reasoning model, known as "o1," significantly heightens the risk of artificial intelligence being misused to create biological weapons.

On September 14, the company disclosed in the model's system card—a document describing how the AI operates and how it was evaluated for safety—that the new model presents a "moderate" risk for chemical, biological, radiological, and nuclear (CBRN) threats. This is the highest risk rating OpenAI has ever assigned to one of its models. The company noted that o1 could "meaningfully assist" experts seeking to create biological weapons.

Experts warn that if powerful AI software falls into the hands of malicious actors, the potential for misuse escalates dramatically. The o1 model's step-by-step reasoning ability, one of its key advances, is precisely the kind of capability that could be exploited for harm.

Mira Murati, OpenAI's Chief Technology Officer, told the Financial Times that because of the model's advanced capabilities, the company approached its public release with extra caution. She added that o1 had undergone rigorous testing by a "red team" of specialists from various scientific fields. According to Murati, the model performs significantly better on overall safety metrics than prior versions.
