Side-by-side comparison of DeepSeek V3.2 (DeepSeek · China) and Hy3-preview (Tencent · China) for self-hosted deployment of the open weights. DeepSeek V3.2 is rated conditional; Hy3-preview is blocked. The two part ways on licence: DeepSeek V3.2 ships under MIT, Hy3-preview under the Tencent Hy Community licence.
| Field | DeepSeek V3.2 | Hy3-preview |
|---|---|---|
| Summary | | |
| Verdict | Conditional. 685B successor to V3 with DeepSeek Sparse Attention for long context (sketched after the table) and scalable RL for agentic tasks. Vendor claims parity with GPT-5, with the Speciale variant claimed to exceed it. The MIT licence keeps the weights clean; Chinese-origin considerations are unchanged from V3. | Blocked. Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights are not deployable for EU users or workloads, regardless of architecture quality (a deployment-guard sketch follows the table). |
| Last reviewed | 2026-04-15 | 2026-04-28 |
| Open-weight | | |
| Licence | MIT | Tencent Hy Community |
| Commercial use | Yes | Excluded in the EU, UK and South Korea |
| Training data | Undisclosed | Undisclosed |
| Origin | China | China |
| Performance & pricing | | |
| Quality index | 32/100 | 42/100 |
| Speed | 32 tok/s | 85 tok/s |
| Blended price | $0.32/M tokens (see blend note after the table) | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.
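The DeepSeek Sparse Attention (DSA) mechanism cited in the verdict is only named on this card, not specified. As a rough illustration of the general idea, the NumPy sketch below has each query attend only to its top-k visible keys; the `top_k` value, the dense scoring pass, and the function names are assumptions for illustration, not DeepSeek's implementation. A real design uses a lightweight indexer precisely so that the full score matrix is never materialised.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(q, k, v, top_k=64):
    """Each query attends only to its top_k highest-scoring visible keys.

    q, k, v: arrays of shape (T, d). A causal mask is applied before
    selection, so position t can only attend to positions 0..t.
    """
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)                    # (T, T) full scores (illustration only)
    future = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[future] = -np.inf                         # mask future positions
    out = np.zeros_like(v)
    for t in range(T):
        kk = min(top_k, t + 1)                       # at most t+1 keys are visible
        idx = np.argpartition(scores[t], -kk)[-kk:]  # indices of the kk largest scores
        w = softmax(scores[t, idx])
        out[t] = w @ v[idx]                          # sparse weighted sum of values
    return out

# Usage: shapes and sizes are arbitrary for the demo.
rng = np.random.default_rng(0)
T, d = 128, 16
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
print(topk_sparse_attention(q, k, v, top_k=16).shape)  # (128, 16)
```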
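Because the Hy3-preview block is a licence-territory issue rather than a technical one, the practical control for a self-hoster is a deployment-time gate. The sketch below is hypothetical throughout: the licence identifier, region codes, and function name are assumptions for this card, not part of either licence's text.

```python
# Regions carved out of the Territory per Sections 1(l) and 5(c) of the
# Tencent Hy Community License Agreement, as summarised in the verdict above.
# The region codes here are hypothetical identifiers for this sketch.
EXCLUDED_REGIONS = {"EU", "GB", "KR"}

def deployment_allowed(licence: str, region: str) -> bool:
    """Gate self-hosted weight deployment on licence territory restrictions."""
    if licence == "tencent-hy-community":  # hypothetical licence identifier
        return region not in EXCLUDED_REGIONS
    return True  # e.g. MIT imposes no territorial restriction

assert deployment_allowed("mit", "EU")
assert not deployment_allowed("tencent-hy-community", "EU")
```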
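The blended price row folds input and output token prices into a single per-million figure. The weighting convention is not stated on this card; a common choice is a 3:1 input:output token ratio, which is assumed below, and the per-token prices in the example are hypothetical, chosen only so the blend lands near the listed $0.32/M.

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Blend per-million-token prices at an assumed input:output usage ratio."""
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# Hypothetical per-token prices; the 3:1 ratio is an assumption, not the card's method.
print(blended_price(0.28, 0.42))  # 0.315, i.e. ~$0.32/M blended
```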