Chinese data centers told to stick to Nvidia chips, domestic chips not compatible
Date:
Sun, 08 Dec 2024 09:05:00 +0000
Description:
Despite significant growth in China's AI computing capacity, US sanctions
limit access to advanced GPUs.
FULL STORY
US export restrictions have targeted China's access to advanced chips, amid
fears that cutting-edge technology could bolster China's military capabilities.
These sanctions have forced the nation to ramp up efforts to develop its own GPU technology, with Chinese start-ups already making significant strides in GPU hardware and software development.
However, the shift from globally recognized Nvidia chips to homegrown alternatives involves extensive engineering, slowing the pace of AI advancement. Despite China's progress in this sector, the challenges posed by incompatible systems and technology gaps remain significant.
High cost and complexity
A government-backed think tank in Beijing has therefore suggested that
Chinese data centers continue to use Nvidia chips due to the high costs and complexity involved in shifting to domestic alternatives.
Nvidia's A100 and H100 GPUs, widely used for training AI models, were barred from export to China in August 2022, leading the company to create modified versions like the A800 and H800. However, these chips were also banned by Washington in October 2023, leaving China with limited access to the advanced hardware it had relied on.
Despite the rapid development of Chinese GPU start-ups, the think tank
pointed out that transferring AI models from Nvidia hardware to domestic solutions remains challenging due to differences in hardware and software.
The extensive engineering required for such a transition would result in significant costs for data centers, making Nvidia chips more appealing
despite the limitations on availability.
Even with US sanctions, China's AI computing power continues to grow at a
rapid pace. As of 2023, China's computing capacity, which includes both
central processing units (CPUs) and GPUs, increased by 27 per cent year on year, reaching 230 Eflops.
GPU-based computing power, essential for AI model training and inference,
grew even faster, with a 70 per cent increase over the same period. Furthermore, China's AI hardware landscape has expanded significantly, with
over 250 internet data centers (IDCs) either completed or under construction
by mid-2023.
These centers are part of a larger push towards new infrastructure, backed by local governments, state-owned telecommunications operators, and private investors. However, this rapid build-out has also led to concerns about overcapacity and under-utilization.
"If the conditions allow, [data centres] can choose [Nvidia's] A100 and H100 high-performance computing units. If the need for computing power is limited, they can also choose H20 or alternative domestic solutions," the China Academy of Information and Communications Technology (CAICT) said in a report on
China's computing power development issued on Sunday.
"The trend of computing power fragmentation is increasingly severe, with GPU average use rates of less than 40 per cent. There are big discrepancies in
hardware across IDCs, such as in GPUs, AI accelerators and network structures, which makes it harder to manage and dispatch hardware resources to accommodate the differing computing needs of AI tasks, further impeding the use rate," the report added.
Via SCMP
======================================================================
Link to news story:
https://www.techradar.com/pro/Chinese-data-centers-told-to-stick-to-Nvidia-chips-domestic-chips-not-compatible