
[Image: The DeepSeek app is seen in this illustration. Credit: Reuters File Photo]
DeepSeek has taken the world by storm, and amid the chatter around the AI model that has given OpenAI's ChatGPT a run for its money, several claims have surfaced. One of them is that DeepSeek V3 cost only around $6 million to train.
Now, a report from SemiAnalysis, an independent research firm, has challenged this narrative, finding the true cost to be far higher: DeepSeek's total server capital expenditure is a staggering $1.3 billion.
The report's breakdown notes that the $6 million estimate covers only GPU pre-training expenses and neglects the significant investments in research and development, infrastructure, and other essential costs the company must bear.
Of the $1.3 billion CapEx calculated, much is directed toward operating and maintaining the expensive GPU clusters that form the backbone of DeepSeek's computational power.
Reportedly, DeepSeek has access to around 50,000 Hopper GPUs -- which, the report clarified, is not the same as having 50,000 H100s. The inventory is a mix of H800s, H100s, and country-specific H20s, which Nvidia produces in light of US export curbs.
Elsewhere, the report highlights DeepSeek's organisational structure. Unlike some of the larger AI labs, DeepSeek operates its own data centres and employs a streamlined model that aids efficiency and agility. With the AI landscape growing increasingly competitive, this capacity to adapt quickly is a vital asset.
A New York Times report said that researchers have found DeepSeek's answers include Chinese propaganda. Taiwan, meanwhile, has banned government agencies from using DeepSeek.
Italy's regulator has blocked the app on data protection grounds, while the Dutch privacy watchdog has launched a probe into DeepSeek's data collection practices.
In India, cloud service providers Ola Krutrim and AceCloud have begun offering DeepSeek's models as a service.