NVIDIA stock has seen a remarkable rebound, largely attributed to the performance of Grok-3, a new AI model reportedly trained on a cluster of 200,000 GPUs. The model has surpassed the capabilities of DeepSeek's and OpenAI's offerings, providing fresh evidence that the Scaling Law—the hypothesis that AI performance keeps improving as training compute and data grow—remains valid.
Elon Musk's xAI launched Grok-3, which has revitalized the market's confidence in NVIDIA, with shares returning to levels seen before the DeepSeek-R1 release. Analysts have pointed out that this tenfold boost in computational power indicates that the potential for advancement in AI is far from exhausted, despite the high costs associated with such extensive pre-training.
The hardware costs associated with Grok-3 have been estimated at around $3 billion. During its pre-training phase, the model used ten times more computational power than its predecessor, Grok-2. At 100,000 GPUs in the initial cluster—reportedly later doubled to 200,000—and roughly $30,000 per unit, the expenditure has raised eyebrows across the industry.
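The reported figures make that estimate easy to check. A minimal back-of-envelope sketch, using only the numbers cited above (100,000 GPUs at about $30,000 each; reported figures, not verified hardware specs):

```python
# Back-of-envelope check of the reported Grok-3 hardware cost.
# Both inputs are the article's reported figures, not confirmed specs.
gpu_count = 100_000       # GPUs in the initial pre-training cluster
price_per_gpu = 30_000    # USD per GPU (reported estimate)

hardware_cost = gpu_count * price_per_gpu
print(f"Estimated hardware cost: ${hardware_cost / 1e9:.0f} billion")
# prints: Estimated hardware cost: $3 billion
```

Doubling the cluster to the reported 200,000 GPUs would push the same estimate toward $6 billion, which is why the figure is quoted as "around $3 billion" for the pre-training phase.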
In various benchmark tests, Grok-3 has outperformed both OpenAI's and DeepSeek's models, achieving a remarkable Elo score of 1400 in LMSYS Arena and leaving competitors in its wake. Yet the enormous cost of training Grok-3—estimated at 200 million GPU hours—has left some in the industry wondering whether DeepSeek truly lost out.
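The 200-million-GPU-hour figure also implies a rough wall-clock training time. A quick sketch, assuming the full reported 200,000-GPU cluster ran at full utilization (an assumption for illustration, not a reported detail):

```python
# Wall-clock time implied by the reported 200M GPU-hours spread
# across a 200,000-GPU cluster at full utilization (assumed).
gpu_hours = 200_000_000
cluster_size = 200_000

hours = gpu_hours / cluster_size
print(f"{hours:.0f} hours, roughly {hours / 24:.0f} days")
# prints: 1000 hours, roughly 42 days
```

Real utilization is always below 100%, so the actual pre-training run would have taken longer than this lower bound.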
While Grok-3's capabilities appear impressive, its high development costs raise questions about its long-term sustainability compared to DeepSeek, which has adopted a more cost-effective strategy. Currently, Grok-3 is closed-source with a subscription fee of $30 per month, whereas DeepSeek has embraced an open-source model, attracting a broader developer community as its technology is integrated into prominent platforms such as WeChat, Baidu, and Tencent services.
As users around the world continue to test Grok-3, initial impressions reveal mixed results. For instance, one user found that while the model handled complex tasks effectively, it struggled with simpler questions, leading some researchers to conclude that Grok-3 is still undergoing fine-tuning.
Musk’s xAI team is committed to continuous improvements, claiming daily updates to enhance Grok-3’s reliability and performance. This commitment is viewed as a response to the rapidly evolving landscape of AI technology, where companies must adapt swiftly to stay competitive.
AI experts, including Nathan Lambert from the Allen Institute for Artificial Intelligence (Ai2), believe Grok-3 signifies a new stage in AI development. The heightened competition from Grok-3 threatens to accelerate the pace at which tech giants release their models, forcing them to reevaluate their traditional timelines for safety testing and deployment.
The battle for dominance among AI models is intensifying, with Grok-3 demonstrating a possible advantage in scaling but also highlighting the complexities of effective training methodologies.
The implications of this competition are significant: companies will increasingly prioritize rapid, practical advances over training-efficiency gains alone. In a race where the quickest path to deployment may be the key to attracting new users, Grok-3's ambitious launch underscores the importance of agility in the AI sector. As the industry evolves, the interplay between model performance, cost, and user accessibility will shape the future of artificial intelligence.