Digital Phablet


OpenAI Acknowledges O1 Model Boosts Bio Weapon Risks

by Rebecca Fraser
September 14, 2024
in News
Reading Time: 1 min read

OpenAI has acknowledged that its latest AI reasoning model, known as “o1,” significantly raises the risk of artificial intelligence being misused to create biological weapons.


On September 14, during a press briefing, the company revealed through its system card—a document that explains how its AI operates—that the new model presents a “moderate” risk regarding chemical, biological, radiological, and nuclear (CBRN) threats. This rating is the highest risk assessment OpenAI has ever assigned. The company noted that the o1 model “substantially enhances” the ability of experts to create biological weapons.

Experts warn that should powerful AI software fall into the hands of malicious individuals, the potential for misuse escalates dramatically. One of the advanced features of the o1 model is its step-by-step reasoning ability, which could be leveraged in harmful ways.

Mira Murati, OpenAI’s Chief Technology Officer, told the Financial Times that due to the advanced functionalities of the o1 model, the company approached its public release with extra caution. She also mentioned that the model has undergone rigorous testing by a “red team,” composed of specialists from various scientific fields. According to Murati, the o1 model performs significantly better in overall safety metrics compared to prior versions.



Rebecca Fraser

Rebecca covers all aspects of Mac and PC technology at Digital Phablet, including PC gaming and peripherals. Over the past ten years she has built multiple desktop PCs for gaming and content production, despite an educational background in prosthetics and model-making. She plays video and tabletop games, occasionally broadcasts to everyone's dismay, and enjoys dabbling in digital art and 3D printing.


© 2026 Digital Phablet
