This week, the National Supercomputing Internet platform announced an API service for QwQ-32B, Alibaba's open-source reasoning model. Registered users receive one million tokens for free.
The platform runs on domestically developed deep computing acceleration cards over a national integrated computing network, giving users convenient access to QwQ-32B and other domestic open-source large models such as DeepSeek-R1.
Users do not need to download any local software: they can start developing with QwQ-32B simply by activating the platform's Notebook feature, or bring in proprietary data for further private deployment.
The QwQ-32B model, launched by Alibaba's Qwen team, is built on the Qwen2.5-32B architecture and enhanced with reinforcement learning. According to official benchmark results, QwQ-32B performs on par with DeepSeek-R1 on the AIME24 mathematics assessment and the LiveCodeBench coding benchmark, while significantly outperforming o1-mini and similarly sized distilled models.
To use the QwQ-32B API service, users need to follow these steps:
1. Search for QwQ-32B on the National Supercomputing Internet marketplace, select the “QwQ-32B API Interface Service” product, and finalize the purchase.
2. After purchasing, click “Use” to navigate to the API information page.
3. Choose your preferred access method: an HTTP client such as Postman or Apifox, Python code, or the terminal directly (a Python sketch follows this list).
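For the Python route, the sketch below assumes the service exposes an OpenAI-compatible chat-completions endpoint; the base URL, API key, and model identifier are placeholders to be replaced with the values shown on the platform's API information page, not values confirmed here.

```python
# Minimal sketch of calling a QwQ-32B API from Python.
# The base URL, API key, and model name are placeholders -- substitute
# the values shown on the platform's API information page.
import requests

API_BASE = "https://example-scnet-endpoint/v1"   # placeholder base URL
API_KEY = "YOUR_API_KEY"                         # issued after purchase


def ask_qwq(prompt: str) -> str:
    """Send one chat request, assuming an OpenAI-compatible endpoint."""
    resp = requests.post(
        f"{API_BASE}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "QwQ-32B",                  # placeholder model identifier
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.6,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_qwq("How many r's are in the word 'strawberry'?"))
```

The same request can be reproduced in Postman or Apifox, or from the terminal, by setting the same URL, Authorization header, and JSON body.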
In addition to the QwQ-32B API, the platform has also recently rolled out the DeepSeek-R1 family of services, with API deployments for models of up to 671 billion parameters.
On March 6, Alibaba released its latest open-source model, QwQ-32B, which is far smaller than DeepSeek-R1 yet performs on par with the world's leading open-source reasoning models. Through large-scale reinforcement learning, QwQ-32B has made significant strides in mathematics, coding, and general capabilities, rivaling DeepSeek-R1 in overall performance.
Furthermore, QwQ-32B sharply reduces deployment costs, making local deployment on consumer-grade graphics cards feasible. The model is available worldwide under the permissive Apache 2.0 license, so anyone can download it and use it commercially.
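For local deployment, a minimal sketch using the Hugging Face Transformers library is shown below. It assumes the open-source checkpoint is published under the Qwen/QwQ-32B repository and relies on 4-bit quantization to fit within high-memory consumer GPU cards, so it should be read as a starting point rather than an official recipe.

```python
# Minimal local-deployment sketch using Hugging Face Transformers.
# Assumes the checkpoint is available as "Qwen/QwQ-32B" and that 4-bit
# quantization (bitsandbytes) keeps it within consumer-grade GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/QwQ-32B"  # assumed Hugging Face repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```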