GPT-4o Mini Only 8B, o1 Just 300B! Microsoft Leaks GPT Secrets

by Seok Chen
January 2, 2025
in AI
Reading Time: 1 min read

In an unexpected turn of events, a recently published Microsoft paper has revealed closely guarded details about OpenAI's GPT models. The study, which was meant to explore other aspects of artificial intelligence, inadvertently disclosed estimated parameter counts for models including GPT-4o mini and o1.

According to the leaked figures, GPT-4o mini has a model size of only about 8 billion parameters, while o1 is estimated at around 300 billion parameters. This revelation has sparked considerable interest and speculation within the tech community regarding the efficiency and capabilities of smaller models compared to their larger counterparts.
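
To put those figures in perspective, here is a rough back-of-envelope sketch in Python. It estimates only the memory needed to store each model's weights at common numeric precisions; the parameter counts are the reported figures taken at face value, and nothing here reflects confirmed implementation details of either model.

# Rough memory footprint for storing model weights only
# (ignores activations, KV cache, and serving overhead).
# Parameter counts are the figures reported in the leaked paper;
# the precisions are standard choices, not confirmed details.

PARAMS = {
    "GPT-4o mini (reported)": 8e9,    # ~8 billion parameters
    "o1 (reported)": 300e9,           # ~300 billion parameters
}

BYTES_PER_PARAM = {"FP16/BF16": 2, "INT8": 1, "INT4": 0.5}

for model, n_params in PARAMS.items():
    for precision, nbytes in BYTES_PER_PARAM.items():
        gib = n_params * nbytes / 2**30
        print(f"{model:24s} @ {precision:9s} ~ {gib:7.1f} GiB")

At 16-bit precision that works out to roughly 15 GiB for an 8-billion-parameter model versus more than 550 GiB for a 300-billion-parameter one, which illustrates the resource gap discussed below.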

Experts noted that the disclosure could have significant implications for the ongoing development of AI technology, potentially encouraging researchers to explore more compact, efficient models that can deliver high performance without the extensive resource demands of larger systems. This incident raises concerns about the handling of sensitive information in research papers and the potential for unintended consequences in the rapidly evolving field of artificial intelligence.

As the tech community digs deeper into the implications of this revelation, discussions around model efficiency, ethical AI development, and the future of large-scale AI research are likely to intensify.

Seok Chen is a mass communication graduate from the City University of Hong Kong.
