Side-by-side comparison of Hy3-preview (Tencent) and Ling-2.6 1T (inclusionAI · China) for self-hosted deployment of the open weights. Hy3-preview is rated blocked; Ling-2.6 1T is rated conditional. They part ways on licence: Hy3-preview ships under the "Tencent Hy Community" licence, while Ling-2.6 1T is MIT.
| Field | Hy3-preview | Ling-2.6 1T |
|---|---|---|
| Summary | | |
| Verdict | Blocked. Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights are not deployable for EU users or workloads, regardless of architecture quality. | Conditional. Per the published model card, Ling-2.6 1T is an MIT-licensed 1-trillion-parameter MoE with a 262k-token context, hybrid MLA + Linear attention and multi-token-prediction support, targeted at production agentic workloads. Permissive weights enable EU self-hosting in principle, though the deployment footprint is non-trivial; vendor jurisdiction (Ant Group, China) and undisclosed training data remain the regulated-buyer blockers. |
| Last reviewed | 2026-04-28 | 2026-05-03 |
| Open-weight | | |
| Licence | Tencent Hy Community | MIT |
| Commercial use | EU territory excluded | Unrestricted |
| Training data | Undisclosed | Undisclosed |
| Origin | China | China |
| Performance & pricing | | |
| Quality index | 42/100 | — |
| Speed | 85 tok/s | — |
| Blended price | — | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.
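The "non-trivial deployment footprint" claim for a 1-trillion-parameter model can be made concrete with back-of-envelope arithmetic. The sketch below is illustrative only: it assumes all parameters are resident in memory (a MoE serving stack may page or shard experts instead) and ignores KV cache, activations, and framework overhead.

```python
# Rough memory needed just to hold the weights of a 1T-parameter model
# at common quantisation levels. Back-of-envelope only: ignores KV cache,
# activations, and serving overhead, and assumes a dense load of all
# parameters (MoE runtimes may keep only active experts resident).

PARAMS = 1_000_000_000_000  # 1T parameters, per the model card

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

def weight_gib(n_params: int, bytes_per_param: float) -> float:
    """Weight storage in GiB for a dense load of all parameters."""
    return n_params * bytes_per_param / 2**30

for fmt, bpp in BYTES_PER_PARAM.items():
    print(f"{fmt:>10}: {weight_gib(PARAMS, bpp):,.0f} GiB")
```

Even at 4-bit quantisation the weights alone land in the hundreds of GiB, i.e. multiple high-memory accelerators before any serving headroom is budgeted.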