
China's Coding Model Surge: Four Frontier Releases in 12 Days

Four Chinese labs released near-frontier open-weights coding models in under two weeks, at a fraction of the cost of Western alternatives. The AI race just became genuinely multipolar.

AI Learning Hub

In a 12-day window in late April, four Chinese AI labs independently released open-weights coding models that reached near-frontier capability at a fraction of the cost of Claude Opus 4.7 or GPT-5.5.

The Four Models

  • Z.ai GLM-5.1: From Tsinghua-affiliated Zhipu AI. Strongest on Python and TypeScript benchmarks. Optimized for agentic coding workflows.
  • MiniMax M2.7: From the Shanghai-based lab. Excels at code explanation and documentation generation alongside raw coding capability.
  • Moonshot Kimi K2.6: From Beijing's Moonshot AI. Competitive across multiple programming languages with a 128K context window.
  • DeepSeek V4: From Hangzhou's DeepSeek. The most general-purpose of the four, matching Claude Opus 4.7 on several coding benchmarks at roughly one-tenth the inference cost.

Why It Matters

The four releases share a common thread: they're open-weights, meaning anyone can download, modify, and deploy them. This is a direct challenge to the proprietary model business that OpenAI and Anthropic are built on.

For developers outside the US and Europe, these models change the economics of building AI-powered tools. Inference costs matter more than benchmark scores when you're running code generation at scale. And on inference cost, the Chinese models are highly competitive: some benchmarks suggest they are 5-10x cheaper than equivalent Western frontier models.
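To see why a 5-10x price gap dominates benchmark scores at scale, here is a back-of-the-envelope sketch in Python. The per-token prices and workload figures are hypothetical placeholders chosen to illustrate the arithmetic, not published rates for any model named above.

```python
# Back-of-the-envelope inference cost comparison for a code-generation
# workload. All prices are HYPOTHETICAL placeholders (USD per million
# generated tokens), not published rates.

def monthly_cost(price_per_mtok: float, tokens_per_request: int,
                 requests_per_day: int, days: int = 30) -> float:
    """Total monthly spend at a given per-million-token price."""
    total_tokens = tokens_per_request * requests_per_day * days
    return price_per_mtok * total_tokens / 1_000_000

# Assumed workload: 2,000 generated tokens per request, 50,000 requests/day.
western = monthly_cost(price_per_mtok=15.0, tokens_per_request=2_000,
                       requests_per_day=50_000)
open_weights = monthly_cost(price_per_mtok=2.0, tokens_per_request=2_000,
                            requests_per_day=50_000)

print(f"Western frontier model:  ${western:,.0f}/month")   # $45,000/month
print(f"Open-weights alternative: ${open_weights:,.0f}/month")  # $6,000/month
print(f"Ratio: {western / open_weights:.1f}x")             # 7.5x
```

At these illustrative numbers the gap is tens of thousands of dollars per month for a single product, which is why a team running generation at volume will often trade a few benchmark points for a cheaper model.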

The Geopolitical Context

China's NDRC (National Development and Reform Commission) recently blocked Meta's $2 billion acquisition of AI agent company Manus, the first state-level prohibition of an inbound AI acquisition. The message is clear: China sees AI sovereignty as a national priority and will block foreign control of domestic AI assets.

Meanwhile, US export controls on advanced chips continue. The four Chinese coding models were reportedly trained on a mix of NVIDIA H800s (the reduced-bandwidth, export-compliant variant of the H100) and domestic Chinese AI chips. If anything, the controls appear to have accelerated China's push toward efficient training techniques that do more with less compute, techniques that make the resulting models cheaper for everyone.