Digital Phablet

Li Yanhong’s Internal Talk Exposed: Open Source Models Inefficient

By Rebecca Fraser
September 11, 2024
News

Recent comments from Baidu’s CEO, Li Yanhong, have shed light on misconceptions surrounding large-scale AI models. In an internal meeting that has since come to public attention, he voiced concern about the growing disparities among these models as they evolve. According to Li, large models hold enormous potential, but a considerable gap remains between their current capabilities and ideal performance. He emphasized the need for continuous, rapid updates to keep pace with user needs, improve efficiency, and reduce costs.

Li pushed back on the prevailing industry notion that there are no longer meaningful barriers separating the capabilities of different large models. He noted that every new model release typically invites comparisons to GPT-4, with announcements often boasting competitive scores or claims of surpassing it in specific areas. However, Li cautioned that such comparisons do not accurately reflect the overall gaps between leading models.

He elaborated that many models are benchmarked shortly after release to validate their performance, and while they may appear similar on paper, practical applications reveal clear differences in capability.

Highlighting the multi-dimensional nature of these disparities, Li pointed out that while the industry often emphasizes skills such as understanding, generating text, logic, and memory, it tends to overlook critical factors like cost and inference speed. Some models, although they may achieve similar outcomes, do so at a higher cost and slower speed, making them less viable than their advanced counterparts.
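
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical and is not a measurement of any real model; the point is only that two models with near-identical benchmark scores can differ several-fold in serving cost and speed.

    # Illustrative only: every number below is a hypothetical assumption, not a
    # measurement of any real model. Two models with comparable benchmark scores
    # can still differ sharply in serving cost and speed, the gap Li points at.
    models = {
        # name: (benchmark_score, tokens_per_second, gpu_cost_per_hour_usd, gpus_needed)
        "model_a": (87.0, 90.0, 2.0, 1),   # similar score, faster, cheaper to serve
        "model_b": (86.5, 30.0, 2.0, 4),   # similar score, slower, needs more GPUs
    }

    for name, (score, tps, gpu_cost, gpus) in models.items():
        tokens_per_hour = tps * 3600
        cost_per_million_tokens = (gpu_cost * gpus) / tokens_per_hour * 1_000_000
        print(f"{name}: score={score}, throughput={tps:.0f} tok/s, "
              f"~${cost_per_million_tokens:.2f} per million tokens")

With these made-up figures, the two models score within half a point of each other, yet one costs roughly twelve times as much per million tokens to serve.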

Li also discussed a shift in perceptions about open-source models. Historically, the term “open-source” was synonymous with low costs or free access, as exemplified by Linux. However, he argued that in the realm of large models, inference is costly, and open-source models do not come with built-in computational resources, which must be purchased separately. This makes efficient utilization of computational power more challenging.

He stated, “Efficiency-wise, open-source models fall short,” and clarified that closed-source models, referred to as commercial models, benefit from shared development costs and optimized resource use among numerous users. Baidu’s large models, such as Wenxin 3.5 and 4.0, reportedly achieve GPU utilization rates exceeding 90 percent.
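
The utilization claim can also be put into a simple calculation. In this minimal sketch, the 90 percent figure is the utilization rate reported for Baidu’s models above; the 40 percent comparison point and the dollar and throughput values are purely hypothetical, chosen only to show how utilization moves the effective cost per token.

    # Back-of-the-envelope sketch: 90% is the utilization figure cited above for
    # Baidu; 40% and all dollar/throughput numbers are hypothetical assumptions.
    GPU_COST_PER_HOUR = 2.0          # hypothetical $/GPU-hour
    PEAK_TOKENS_PER_HOUR = 300_000   # hypothetical throughput at 100% utilization

    for utilization in (0.90, 0.40):
        effective_tokens = PEAK_TOKENS_PER_HOUR * utilization
        cost_per_million = GPU_COST_PER_HOUR / effective_tokens * 1_000_000
        print(f"utilization {utilization:.0%}: ~${cost_per_million:.2f} per million tokens")

Under these assumptions, dropping from 90 percent to 40 percent utilization more than doubles the effective cost of every token served, which is the kind of efficiency gap Li attributes to self-hosted open-source deployments.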

In educational and research contexts, he acknowledged the value of open-source models but maintained that in commercial settings—where efficiency and cost-effectiveness are paramount—open-source options do not provide a competitive edge.

Li envisaged the evolution of AI applications, starting with tools like Copilot that assist users, progressing to intelligent agents that exhibit a degree of autonomy in utilizing tools and self-evaluation, ultimately leading to the development of AI workers capable of independently managing diverse tasks.

Despite the growing excitement around intelligent agents, Li noted that a consensus on this direction has not yet been widely established. Companies like Baidu that position intelligent agents as a crucial strategic focus are relatively few.

Moreover, he explained that the barrier to entry for building intelligent agents is relatively low: even teams that do not yet know how to apply large models to practical problems can build agents on top of them, which offers a straightforward and efficient path to harnessing their capabilities.
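
As an illustration of that path, here is a minimal agent-loop sketch in Python. The call_llm function is a hypothetical stand-in for whatever hosted model is actually used (Wenxin, GPT-4, or another); the tools, prompt format, and loop structure are illustrative assumptions, not Baidu’s design.

    # Minimal sketch of an "agent on top of a large model".
    # call_llm is a hypothetical placeholder, not a real API.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in a real model client here")

    # Toy tools the agent can invoke; real agents would wrap search, code, APIs, etc.
    TOOLS = {
        "echo": lambda text: f"(echoed: {text})",
        "word_count": lambda text: str(len(text.split())),
    }

    def run_agent(task: str, max_steps: int = 5) -> str:
        """Ask the model what to do, run the chosen tool, feed the result back."""
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            decision = call_llm(
                "\n".join(history)
                + "\nReply with 'TOOL <name> <input>' or 'FINAL <answer>'."
            )
            if decision.startswith("FINAL"):
                return decision[len("FINAL"):].strip()
            _, name, tool_input = decision.split(" ", 2)
            history.append(f"Tool {name} returned: {TOOLS[name](tool_input)}")
        return "No answer within the step limit."

The loop captures the progression Li sketches: the model decides, a tool acts, and the result is fed back for the model to evaluate, with the underlying large model doing the heavy lifting.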

Rebecca Fraser

Rebecca covers all aspects of Mac and PC technology, including PC gaming and peripherals, at Digital Phablet. Over the past ten years she has built multiple desktop PCs for gaming and content production, despite an educational background in prosthetics and model-making. She plays video and tabletop games, occasionally broadcasts to everyone's dismay, and enjoys dabbling in digital art and 3D printing.
