BEIJING — DeepSeek's much-anticipated V4 artificial intelligence model is now expected to arrive in late April 2026, according to an internal communication from founder Liang Wenfeng reported this week by AIbase. The confirmation caps months of missed deadlines that have turned the model into a barometer for China's ability to build frontier AI on domestic hardware, a question with enormous implications for the global chip industry and the U.S.-China technology competition.
V4 will be the successor to DeepSeek R1, the reasoning model that stunned the AI community when it launched in January 2025 and demonstrated performance competitive with OpenAI's models at a fraction of the training cost. But where R1 was built on Nvidia GPUs acquired before the tightest U.S. export controls took effect, V4 is being built on something entirely different: Chinese silicon.
Repeated Delays and a Huawei Pivot | V4's Long Road
V4 was initially rumored for a mid-February launch timed to the Lunar New Year, mirroring the company's splashy rollout of DeepSeek R1 in January 2025. That window came and went. A March target flagged by the Financial Times also passed without a release. On April 3, Reuters, citing The Information, reported that V4 would launch "in the next few weeks" and would run on the latest chips designed by Huawei Technologies.
| Timeline | Status |
|---|---|
| January 2025 | DeepSeek R1 launches, built on Nvidia GPUs. Competitive with OpenAI at lower cost. |
| Mid-February 2026 (rumored) | V4 expected around Lunar New Year. Deadline missed. |
| March 2026 (Financial Times) | Revised target. Deadline missed again. |
| April 3, 2026 (Reuters) | V4 reportedly launching "in the next few weeks" on Huawei chips. |
| April 11, 2026 (AIbase) | Liang Wenfeng confirms late April via internal communication. |
The Reuters report revealed a critical detail: DeepSeek had spent months rewriting parts of its model stack to work with Huawei's Ascend processors and chips from Cambricon Technologies, while deliberately withholding early access from American chipmakers including Nvidia and AMD. The pivot is not merely technical. It is strategic. By building V4 on Chinese silicon from the ground up, DeepSeek is positioning itself as proof that China's AI ecosystem can operate independently of American hardware.
Huawei 950PR | The Chip Behind V4
The hardware underpinning V4 is Huawei's forthcoming 950PR processor, an evolution of the Ascend line that Huawei has been developing as China's answer to Nvidia's data center GPUs. Major Chinese technology firms, including Alibaba, ByteDance, and Tencent, have placed orders for hundreds of thousands of the 950PR in anticipation of DeepSeek's rollout, according to Reuters. Huawei plans to deliver roughly 750,000 units of the 950PR this year, with mass production expected to begin next month.
The scale of the 950PR orders signals that China's largest technology companies are betting heavily on a domestic chip ecosystem. If V4 achieves competitive performance on Huawei silicon, it would validate the thesis that U.S. export controls, while disruptive, have not stopped China from building frontier AI. They may have simply redirected the supply chain.
A Test of Two AI Ecosystems
The delay is being watched closely in both Washington and Beijing. U.S. export controls, first imposed in October 2022 and repeatedly expanded, pushed Chinese AI labs to seek alternatives to American silicon. The Trump administration shifted policy in January 2026, replacing blanket denials with case-by-case licensing for certain chip categories, but the damage to trust had already been done. DeepSeek's decision to withhold early access from Nvidia and AMD, rather than the other way around, represents a notable reversal of the traditional dynamic in which Chinese firms competed for allocations of American hardware.
The V4 launch will function as a live benchmark of two competing approaches to AI infrastructure. The American ecosystem, anchored by Nvidia's Blackwell B300 architecture, AMD's MI300X, and Google's TPU v6, offers raw performance advantages that no Chinese chip currently matches on a per-unit basis. But DeepSeek has consistently demonstrated that algorithmic efficiency can compensate for hardware limitations. R1 achieved its breakout performance not by brute-forcing computation but by developing training techniques that extracted more capability per FLOP than competitors thought possible.
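To put "capability per FLOP" in concrete terms, a widely used back-of-envelope estimate puts the total training compute of a dense transformer at roughly 6 × parameters × tokens FLOPs. The sketch below applies that approximation to purely hypothetical figures; the model size and token count are illustrative assumptions, not reported DeepSeek numbers.

```python
# Back-of-envelope training-compute estimate using the common
# approximation: total FLOPs ~ 6 * N (parameters) * D (training tokens).
# All concrete figures below are hypothetical placeholders, not
# reported numbers for any DeepSeek model.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical example: a 100B-parameter model trained on 10T tokens.
flops = training_flops(100e9, 10e12)
print(f"~{flops:.2e} FLOPs")  # ~6.00e+24 FLOPs
```

At this scale, even a modest percentage improvement in training efficiency translates into enormous absolute savings in chip-hours, which is why algorithmic gains can partially offset a per-chip performance deficit.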
If V4 closes the gap further, or even matches frontier Western models, while running on Huawei silicon, it would raise uncomfortable questions about the long-term efficacy of chip export controls as a tool for maintaining an American lead in AI. Conversely, if V4 falls short, it would suggest that the hardware gap remains real despite China's progress, and that DeepSeek's earlier successes owed more to Nvidia GPUs than to the company's algorithmic innovations alone.
Liang Wenfeng | The Quant Trader Behind DeepSeek
Liang Wenfeng, 40, founded DeepSeek in 2023 as a research spin-off from High-Flyer Capital Management, the quantitative hedge fund he built into one of China's most successful. Unlike most Chinese AI founders who come from academic or big-tech backgrounds, Liang's roots in quantitative trading gave him an unusual set of advantages: deep experience with large-scale computing infrastructure, a willingness to invest heavily in speculative research, and a culture of secrecy unusual even by Chinese tech standards.
High-Flyer had quietly accumulated a stockpile of roughly 10,000 Nvidia A100 GPUs before the first round of U.S. export restrictions in October 2022. That inventory gave DeepSeek its initial runway. But the V4 pivot to Huawei silicon suggests that Liang is looking past the Nvidia stockpile and building for a future in which Chinese AI labs cannot count on American hardware at all.
Liang has said little publicly about V4, preferring to let the model's performance speak for itself. The internal communication reported by AIbase is consistent with his typical approach: quiet confirmation to employees rather than a public announcement, with the product itself serving as the marketing event.
The Broader AI Landscape | Timing and Context
V4's late April arrival, if it holds, would land in an AI landscape that looks dramatically different from when the model was first expected. In the Western ecosystem, Anthropic's Claude Mythos Preview has triggered a sell-off in enterprise software stocks by demonstrating that large language models can autonomously write, deploy, and maintain production software. OpenAI is preparing its own next-generation model. Google's Gemini 2.0 is in wide release.
In China, the competitive pressure is equally intense. Alibaba's Qwen team, ByteDance's Doubao, and Baidu's ERNIE have all shipped updates in 2026. But DeepSeek occupies a unique position: it is the only Chinese lab whose models have achieved genuine viral adoption in the West, with R1 generating significant attention from American researchers and developers who were impressed by its reasoning capabilities. V4's reception will depend not just on raw benchmarks but on whether DeepSeek can maintain that crossover appeal while running entirely on non-American hardware.
The infrastructure deals reshaping Western AI, such as Anthropic's multiyear compute lease with CoreWeave, underscore the divergence. Western labs are securing Nvidia GPU capacity through long-term contracts worth billions. DeepSeek is building its own path. Whether that path leads to competitive parity or becomes a cautionary tale about the limits of domestic substitution will be one of the defining questions of the AI industry in the second half of 2026.
What to Watch for in Late April
When V4 does arrive, the AI community will be scrutinizing several dimensions. First, raw performance on standard benchmarks, including coding (HumanEval, SWE-Bench), mathematics (MATH, GSM8K), and reasoning (ARC, GPQA), compared to the latest versions of GPT, Claude, and Gemini. Second, training efficiency: how much compute was required and at what cost, the metric where DeepSeek has historically excelled. Third, inference speed and cost: whether V4 on Huawei silicon can match the throughput that Nvidia-based deployments achieve.
Perhaps most importantly, the AI industry will be watching whether DeepSeek open-sources V4 weights, as it did with R1. That decision, more than any benchmark score, would determine V4's real-world impact. An open-weight model running on Chinese hardware, competitive with closed Western frontier models running on Nvidia GPUs, would reshape the strategic calculus of export controls, AI governance, and the global distribution of AI capability.
Late April is now the date. After months of delays, the barometer is about to get its reading.
Written by
Jack Brennan