Geopolitics meets the open-source LLM crowd: Hugging Face is blocked in mainland China, even as domestic open-weight models flourish. The block is network-level rather than a publishing ban, and it traces back to China's AI rules governing what models are allowed to generate [1].
Inside China, domestic ecosystems pick up the slack:
• modelscope.cn provides an accessible, China-only hub for open models, partially filling the gap left by blocks on external sites [1].
• GitHub access remains restricted and censorship applies at the API/inference layer; even so, many open weights, including DeepSeek models, can still be downloaded and run locally with llama.cpp, though content filters and jailbreak workarounds complicate results (see the sketch after this list) [1].
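For readers curious what the local route looks like in practice, here is a minimal sketch assuming the ModelScope Python SDK (`modelscope`) and the `llama-cpp-python` bindings for llama.cpp; the repository ID and GGUF filename are illustrative placeholders, not verified listings.

```python
# Sketch: fetch an open-weight model from modelscope.cn and run it locally
# with llama.cpp's Python bindings. The repo ID and GGUF filename below are
# hypothetical examples, not verified model listings.
from modelscope import snapshot_download   # pip install modelscope
from llama_cpp import Llama                # pip install llama-cpp-python

# Download the model repository to a local cache directory.
model_dir = snapshot_download("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B-GGUF")  # hypothetical repo ID

# Point llama.cpp at a quantized GGUF file from the downloaded repo.
llm = Llama(
    model_path=f"{model_dir}/model-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Local inference: no hosted API or network-level filter in the loop.
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The point of the sketch is simply that once the quantized weights are on disk, inference runs entirely offline, which is why network-level blocks matter less for local users than for anyone depending on hosted endpoints.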
Meanwhile, outside the gate, open-weight innovation keeps marching. LongCat-Flash-Thinking from Meituan is pitched as a standout among open-source models for logic, math, coding, and agent tasks, with performance and efficiency gains prominently claimed [2]. Reported figures include 64.5% fewer tokens to reach top-tier accuracy on AIME25 with native tool use, and an asynchronous RL training setup said to deliver a 3x speedup over synchronous frameworks [2].
Regulatory regimes will continue shaping access, development, and the global LLM landscape, with China fostering domestic ecosystems even as external access tightens.
References
[1] Why is Hugging Face blocked in China when so many open-weight models are released by Chinese companies? China's block on Hugging Face is mostly a network-level ban tied to censorship laws, while Chinese firms continue to publish open-weight models domestically.
[2] LongCat-Flash-Thinking. Promotes LongCat-Flash-Thinking as a fast, efficient, open-source model; discusses quantization, memory needs, and comparisons to DeepSeek, Qwen, and GLM.