Side-by-side comparison of Hy3-preview (Tencent) and Ling-2.6 Flash (inclusionAI, China) as open-weight models for self-hosted deployment. Hy3-preview is rated Blocked; Ling-2.6 Flash is rated Conditional. The decisive difference is the licence: Hy3-preview ships under the Tencent Hy Community licence, Ling-2.6 Flash under MIT.
| Field | Hy3-preview | Ling-2.6 Flash |
|---|---|---|
| Summary | | |
| Verdict | Blocked. Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights cannot be deployed for EU users or workloads, regardless of architecture quality. | Conditional. Per the published model card, Ling-2.6 Flash is an MIT-licensed 104B-parameter MoE (7.4B active) built on a hybrid Lightning-Linear + MLA attention design, positioned for agentic and tool-use workflows. The permissive weights are deployable in EU infrastructure; the headline risks for regulated buyers are vendor jurisdiction (Ant Group's inclusionAI lab, headquartered in China) and the absence of any training-data disclosure in the model card. |
| Last reviewed | 2026-04-28 | 2026-05-03 |
| Open-weight | | |
| Licence | Tencent Hy Community | MIT |
| Commercial use | EU territory excluded | Unrestricted |
| Training data | Undisclosed | Undisclosed |
| Origin | China | China |
| Performance & pricing | | |
| Quality index | 42/100 | — |
| Speed | 85 tok/s | — |
| Blended price | — | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.