At the ongoing Mobile World Congress in Barcelona, Huawei introduced its new range of computing products, marking its first international presentation of a super-node computing cluster. This move signifies its effort to carve out a space in the high-end AI computing market, traditionally led by Nvidia.
The new lineup features the Atlas 950 SuperPoD intelligent computing system, the TaiShan 950 SuperPoD general-purpose server cluster, as well as the Atlas 850E, TaiShan 500, and TaiShan 200 series. The launch underscores Huawei's ambition to strengthen its foothold in global AI infrastructure amid mounting competitive pressure.
The Atlas 950 SuperPoD spans approximately 1,000 square meters and comprises 128 cabinets. It delivers about 8 exaFLOPS of FP8 compute performance and 16 exaFLOPS of FP4 performance, according to officials at the annual technology event.
In comparison, Nvidia’s Vera Rubin NVL72 supercomputer, announced in January, contains 72 Rubin GPUs and 36 Vera CPUs, with each rack providing around 3.6 exaFLOPS of FP4 performance. While Huawei’s system boasts higher total computing capacity, Nvidia still leads in compute density per rack.
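The density claim follows from the reported figures. A rough back-of-the-envelope check, under the assumptions that the Atlas 950's 16-exaFLOPS FP4 total is spread evenly across its 128 cabinets and that one Huawei cabinet is treated as roughly comparable to one Nvidia rack:

```python
# Back-of-the-envelope density comparison from the reported figures.
# Assumptions: even distribution of FP4 compute across cabinets, and
# a Huawei cabinet treated as roughly equivalent to an Nvidia rack.

atlas_fp4_total_eflops = 16.0    # reported FP4 total for the full SuperPoD
atlas_cabinets = 128             # reported cabinet count
nvl72_fp4_per_rack_eflops = 3.6  # reported per-rack FP4 for Vera Rubin NVL72

atlas_fp4_per_cabinet = atlas_fp4_total_eflops / atlas_cabinets
ratio = nvl72_fp4_per_rack_eflops / atlas_fp4_per_cabinet

print(f"Atlas 950 per cabinet: {atlas_fp4_per_cabinet:.3f} exaFLOPS FP4")  # 0.125
print(f"NVL72 per rack:        {nvl72_fp4_per_rack_eflops:.3f} exaFLOPS FP4")
print(f"NVL72 / Atlas density ratio: {ratio:.1f}x")
```

On these assumptions, each NVL72 rack packs roughly 28.8 times the FP4 compute of one Atlas cabinet, even though the full 128-cabinet SuperPoD's aggregate is larger.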
The exponential growth in AI demand is prompting a shift in cluster development. Huawei executives explained that the increasing need for large-scale processing is driven by advances in agentic AI, with trillion-parameter models and quadrillion-token training becoming standard. Model context lengths have expanded dramatically, from kilobytes to megabytes, and memory requirements have surged by factors of five to ten. For applications like financial risk management and fraud detection, inference latency requirements have tightened to under 20 milliseconds, and in some cases below 10 milliseconds.
As single GPU performance hits its limits for the demands of sophisticated models, integrating multiple chips and adopting advanced system architectures are becoming essential, the company said.
The Atlas 950 SuperPoD uses a cabinet housing 64 AI accelerator cards as its fundamental unit and can incorporate up to 8,192 neural processing units through high-speed interconnection, according to Zhang Xiwei, president of Huawei’s computing product division. Hundreds of Atlas 900 super nodes based on the earlier Lingqu 1.0 interconnection technology have already been deployed commercially.
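The scaling arithmetic implied here is easy to verify: 128 cabinets of 64 cards each yields exactly the quoted 8,192-NPU ceiling. A quick sanity check on the reported figures:

```python
# Sanity check on the reported Atlas 950 scaling figures.
cards_per_cabinet = 64  # fundamental unit: one cabinet of 64 accelerator cards
cabinets = 128          # cabinet count reported for the full system

total_npus = cards_per_cabinet * cabinets
print(total_npus)  # 8192, matching the quoted maximum NPU count
```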
Last September, Huawei revealed details of its chip development plans, which include the Ascend AI chip, Kunpeng computing processor, super nodes, clusters, and the Lingqu interconnection protocol. These elements are central to Huawei’s competitive edge in computing, culminating in the recent introduction of the Atlas 950 SuperPoD overseas.
“We believe that super nodes and clusters are essential to overcoming China’s limitations in chip manufacturing and ensuring continuous computing support for the country’s AI development,” said a Huawei executive.
According to the company’s schedule, the Atlas 950 SuperPoD will be available in China by the end of this year. The company also plans to launch the Ascend 950PR and Ascend 950DT chips in the first and fourth quarters, respectively.