Digital Phablet


OpenAI Revives “Bubble” For Large Models Again

by Rebecca Fraser
September 14, 2024
in News

OpenAI has officially launched its highly anticipated o1 model, marking a significant milestone in AI development this year. The news has positively impacted the stock market, with Nvidia’s shares rising by 10% over two days, underscoring the company’s status as a leading beneficiary of AI advancements.


The o1 model is designed to utilize increased computing power for problem-solving, taking time to “think” for several seconds or even longer before providing answers. OpenAI claims that when addressing complex math problems or programming tasks, the o1 model outperforms other existing models on the market.

However, this announcement wasn’t without its controversies. Shortly after CEO Sam Altman’s tweet announcing the full rollout of o1, a user commented, pressing for the release of the new voice features promised back in May. Altman responded, urging users to take a moment to appreciate the “magical intelligence” of the new model before asking for additional features.

The user was referring to the anticipated end-to-end voice functionality of GPT-4o, which had been showcased in a demonstration earlier this year. This functionality was supposed to be available to millions of ChatGPT’s paid users within weeks but has yet to be released.


In the past year, OpenAI’s initiatives have often fallen short of expectations; for example, while GPT-4 has been available for over a year, there is still no sign of the next-generation GPT-5. The video model Sora, which was introduced earlier this year, has also not been launched for broad use, with only select industry professionals given access.

This pattern of delays has eroded investor confidence in AI models. In response, some major tech firms and AI companies in China paused the training of foundational models mid-year, reallocating resources towards application development and GPU rental services. They are increasingly wary of the limited progress in technology and have started scaling back investments to seek returns.

Before this week, Nvidia’s market value had plummeted more than 20% since its June highs, and Microsoft saw a 13% decrease, leading to a combined loss of hundreds of billions of dollars in market cap. According to Microsoft’s CFO, the significant investment they’ve made in large models might not yield returns for 15 years or longer.

Research from Sequoia indicates that last year, investments in the AI sector exceeded revenues by over $120 billion, a gap that could widen to $500 billion this year. Yet, aside from Nvidia, very few companies have experienced substantial income growth. This has prompted discussions in the industry about the potential for another bubble collapse if large model capabilities plateau.

Analysts note that such “bubbles” aren’t inherently negative; they often precede transformative technologies going mainstream. The key questions are whether the vision can be realized, and when. Prolonged failure to deliver can bankrupt companies and severely damage entire sectors and economies; if the vision is realized, the bubble proves in hindsight to have been merely a reflection of genuine technological progress.

OpenAI’s release of the o1 model may offer a reprieve, temporarily dispelling doubts about the viability of further advancements in large models. The model not only achieves significant improvements in programming, mathematics, and science but also outlines a progression path for OpenAI’s followers and their investors. Previously, compute was devoted largely to accumulating knowledge through training on massive datasets; o1 instead shifts resources toward reasoning and logical capability.


Last year, large model training began hitting the limits of scaling laws, with performance gains slowing as models grew larger. OpenAI has introduced two versions of the o1 model for user interaction, o1-preview and o1-mini, with more models in the pipeline.

The name o1 distinguishes it from the previous GPT series due to significant changes in training methodologies. OpenAI’s blog highlights this as a “reasoning model,” contrasting it with the earlier termed “Large Language Models.” Traditional models like GPT primarily operate on a two-step training process: pre-training with massive datasets and fine-tuning for specific knowledge. In contrast, o1 emphasizes reinforcement learning and the “Chain of Thought” reasoning process.
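As a toy illustration of that contrast (hypothetical code, not OpenAI’s actual pipeline), a direct answer produces one opaque result, while a chain-of-thought answer decomposes the same computation into intermediate steps that can be checked individually:

```python
def solve_direct(a, b, c):
    # "Traditional" style: one opaque step, no visible reasoning.
    return a * b + c

def solve_with_chain_of_thought(a, b, c):
    # "Reasoning" style: decompose the problem into intermediate steps.
    # Each step can be inspected (and, during training, rewarded or
    # corrected) independently of the final answer.
    steps = []
    product = a * b
    steps.append(f"Step 1: multiply {a} by {b} to get {product}")
    total = product + c
    steps.append(f"Step 2: add {c} to {product} to get {total}")
    return steps, total
```

Calling `solve_with_chain_of_thought(17, 24, 100)` yields two named steps and the total 508. The point is that a reward signal can attach to each step rather than only to the final answer, which is what makes reinforcement learning over reasoning chains possible.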

As noted by OpenAI, through reinforcement learning, o1 hones its reasoning pathways, improving its error identification and correction capabilities, as well as its ability to break down complex problems into simpler steps—a considerable boost to the model’s reasoning power.

OpenAI’s o1 employs a methodology akin to that of AlphaGo and AlphaZero, which utilized reinforcement learning for self-play, thereby learning strategies that improved win rates. It also assists in generating data for model training.
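In miniature, the self-play idea looks like the following sketch. This is a tabular toy, not AlphaZero itself (which combines tree search with a neural network): one policy plays both sides of a simple Nim game, and a minimax-style bootstrap propagates wins and losses back through the positions it visits:

```python
import random

def train_self_play(max_pile=10, episodes=30000, alpha=0.5, eps=0.3, seed=0):
    """Q-learning via self-play on Nim: players alternately take 1-3 stones
    from a pile; whoever takes the last stone wins."""
    rng = random.Random(seed)
    q = {(s, a): 0.0
         for s in range(1, max_pile + 1)
         for a in range(1, min(3, s) + 1)}
    for _ in range(episodes):
        pile = rng.randint(1, max_pile)
        while pile > 0:
            actions = list(range(1, min(3, pile) + 1))
            # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(pile, x)])
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # took the last stone: a win
            else:
                # The opponent moves next, so our value is the negation
                # of the opponent's best value (minimax-style bootstrap).
                target = -max(q[(nxt, b)] for b in range(1, min(3, nxt) + 1))
            q[(pile, a)] += alpha * (target - q[(pile, a)])
            pile = nxt
    return q

def best_move(q, pile):
    return max(range(1, min(3, pile) + 1), key=lambda a: q[(pile, a)])
```

After training, the greedy policy recovers the known optimal strategy of leaving the opponent a multiple of four stones, and the episodes generated along the way are exactly the kind of self-produced training data the article describes.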

In testing, o1 demonstrated a significant advantage over GPT-4o, scoring four times higher on math-competition datasets and 5.6 times higher on programming competitions. OpenAI indicates that the current early versions of o1-preview and o1-mini outperform GPT-4o at sophisticated problem-solving across many domains but still face challenges in areas like personal writing and content editing.

Despite these strengths, the o1 series has weaknesses: its emphasis on reasoning makes response times significantly longer. Where GPT-4o can answer quickly, the o1 series takes considerably longer, complicating its practical use for simple queries.

Looking ahead, the ongoing competition for computational power among AI models remains critical. Experts believe o1’s introduction has revealed a new approach to enhancing large model capabilities, focusing on spending more time reasoning rather than merely increasing training data and model size.
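A well-known, simple way to trade inference-time compute for accuracy is self-consistency: sample a stochastic solver several times and keep the majority answer. The sketch below uses a stubbed solver (the answer 42 and the 60% error profile are made up for illustration; o1's internal mechanism is undisclosed):

```python
import random
from collections import Counter

def noisy_solver(rng, correct=42, p_correct=0.6):
    # Stand-in for a stochastic model: right 60% of the time,
    # otherwise returns one of a few plausible wrong answers.
    return correct if rng.random() < p_correct else rng.choice([41, 43, 44])

def majority_vote(rng, n_samples):
    # More samples means more inference-time compute spent per question.
    votes = Counter(noisy_solver(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(majority_vote(rng, n_samples) == 42 for _ in range(trials)) / trials
```

With one sample per question, accuracy sits near the solver's raw 60%; with fifteen samples it rises well above 90%, at fifteen times the compute. Spending compute at answer time rather than training time is the trade this toy makes concrete.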

OpenAI has yet to disclose the computational costs of the o1 series, but preliminary assessments suggest o1 requires far more compute than the previous GPT models. ChatGPT Plus subscribers currently face much tighter usage limits for o1-preview and o1-mini than for GPT-4o.

With its launch, there is renewed optimism in the tech community, spurring further research and development as companies recalibrate their strategies to compete in the evolving AI landscape. As one Chinese AI researcher succinctly put it, “It’s time to get serious about the work, or we might fall out of the game entirely.”

Rebecca Fraser

Rebecca covers all aspects of Mac and PC technology, including PC gaming and peripherals, at Digital Phablet. Over the previous ten years, she built multiple desktop PCs for gaming and content production, despite her educational background in prosthetics and model-making. Playing video and tabletop games, occasionally broadcasting to everyone's dismay, she enjoys dabbling in digital art and 3D printing.



© 2025 Digital Phablet
