Side-by-side comparison of DeepSeek R1 (DeepSeek, China) and Hy3-preview (Tencent) as candidates for self-hosted, open-weight deployment. DeepSeek R1 is rated conditional; Hy3-preview is blocked. The two part ways on licence: DeepSeek R1 ships under MIT, Hy3-preview under the Tencent Hy Community licence.
| Field | DeepSeek R1 | Hy3-preview |
|---|---|---|
| Summary | ||
| Verdict | **Conditional.** Frontier reasoning model at o1-class performance. The MIT licence makes the weights legally clean. Carries the same Chinese-origin alignment and supply-chain considerations as DeepSeek V3. Distilled Qwen/Llama variants inherit their base model's licence. | **Blocked.** Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights cannot be deployed for EU users or workloads, regardless of architecture quality. |
| Last reviewed | 2026-04-15 | 2026-04-28 |
| Open-weight | ||
| Licence | MIT | Tencent Hy Community |
| Commercial use | Yes | Yes, but EU/UK/South Korea excluded from Territory |
| Training data | Undisclosed | Undisclosed |
| Origin | China | China |
| Performance & pricing | ||
| Quality index | 27/100 | 42/100 |
| Speed | — | 85 tok/s |
| Blended price | $2.36/M | — |
| Context window | — | — |
| Evidence | ||
| Sources | ||
No overlapping sources between the two entries.
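The territory restriction above is the decisive difference between the two verdicts, so it is worth encoding as a hard gate rather than a policy note. Below is a minimal sketch of such a gate; the model registry, region codes, and function names are illustrative assumptions, not part of any real deployment tooling.

```python
# Hypothetical licence-territory gate for self-hosted model deployment.
# Region codes and the registry layout are assumptions for illustration.

# Regions outside the Tencent Hy Community licence's defined Territory
# (per Sections 1(l) and 5(c), as summarised in the table above).
HY_EXCLUDED_REGIONS = {"EU", "UK", "KR"}

MODELS = {
    # MIT imposes no territorial restriction on the weights.
    "deepseek-r1": {"licence": "MIT", "excluded_regions": set()},
    # Tencent Hy Community licence excludes EU/UK/South Korea.
    "hy3-preview": {
        "licence": "Tencent Hy Community",
        "excluded_regions": HY_EXCLUDED_REGIONS,
    },
}


def deployable(model: str, region: str) -> bool:
    """Return True if the model's licence permits serving users in `region`."""
    return region not in MODELS[model]["excluded_regions"]
```

Under these assumptions, `deployable("deepseek-r1", "EU")` returns `True`, while `deployable("hy3-preview", "EU")` returns `False`, matching the conditional and blocked verdicts in the table.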