Side-by-side comparison of Hy3-preview (Tencent · China) and OLMo 2 32B (AllenAI · USA) as open-weight models for self-hosted deployment. Hy3-preview is rated blocked for EU use; OLMo 2 32B is EU-ready. The two diverge chiefly on licence: Hy3-preview ships under the "Tencent Hy Community" licence, OLMo 2 32B under Apache 2.0.
| Field | Hy3-preview | OLMo 2 32B |
|---|---|---|
| Summary | | |
| Verdict | Blocked. Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights cannot be deployed for EU users or workloads, regardless of architecture quality. | EU-ready. Fully open model: weights, training data (Dolma 2), training code, checkpoints, and logs all published, Apache 2.0 across the board. Strongest choice when AI Act transparency obligations matter. |
| Last reviewed | 2026-04-28 | 2026-04-15 |
| Open-weight | | |
| Licence | Tencent Hy Community | Apache 2.0 |
| Commercial use | EU territory excluded | Yes |
| Training data | Undisclosed | Disclosed |
| Origin | China | USA |
| Performance & pricing | | |
| Quality index | 42/100 | 11/100 |
| Speed | 85 tok/s | — |
| Blended price | — | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.