Recent reports suggest that OpenAI has grown dissatisfied with some of NVIDIA’s latest AI chips, particularly their performance on inference tasks. Sources close to the company indicate that OpenAI has been exploring alternative hardware options since last year.
The main issue centers on speed: NVIDIA’s hardware has not met OpenAI’s expectations on certain inference workloads, such as software development and inter-system communication, leading to slower response times for ChatGPT users. This has prompted the company to shift its strategic focus toward chips designed specifically for AI inference. Traditional NVIDIA GPUs, which rely on external memory, appear less suited to the frequent data reads that inference requires.
In contrast, competitors like Anthropic and Google have gained an edge by employing Google’s proprietary tensor processing units (TPUs), which are tailored for inference tasks. These purpose-built chips have demonstrated superior performance in certain scenarios, outperforming general-purpose GPUs in both speed and efficiency.
Despite these developments, OpenAI’s spokesperson clarified that the vast majority of its inference compute clusters still run on NVIDIA chips, emphasizing that NVIDIA’s hardware continues to offer the best performance for the cost on inference workloads. NVIDIA, for its part, stated that customers continue to prefer its products because they deliver top-tier performance at scale along with cost efficiency.
OpenAI CEO Sam Altman took to social media platform X to dispel rumors, saying, “We value our collaboration with NVIDIA—they produce the world’s best AI chips. Our goal is to be a long-term major customer of theirs. I have no idea where these baseless rumors are coming from.”
Last September, NVIDIA reportedly planned to invest up to $100 billion in OpenAI. If finalized, the deal would have included NVIDIA taking an equity stake in the organization, with OpenAI receiving substantial funds to purchase high-end chips. However, the negotiations—initially expected to conclude within weeks—have been delayed for months.
Sources close to the situation reveal that during this delay, OpenAI has entered into agreements with companies like Sovereign Semiconductor and others, acquiring alternative GPUs that could rival NVIDIA’s offerings. Additionally, OpenAI’s evolving product roadmap has led to changing hardware requirements, complicating ongoing negotiations with NVIDIA.
When asked about the reports suggesting dissatisfaction with NVIDIA chips, NVIDIA CEO Jensen Huang dismissed the claims as “completely unfounded” during a weekend interview. “We are investing heavily and believe in OpenAI. Their work is incredible—they are one of the most influential companies today,” he stated.
While the disputes and negotiations continue, both companies affirm their mutual respect and confidence in the future of AI development.




