Side-by-side comparison of Hy3-preview (Tencent) and Talkie-1930-13B-IT (Talkie-LM, research) as open-weight candidates for self-hosted deployment. Hy3-preview is rated blocked; Talkie-1930-13B-IT is rated conditional. The decisive difference is the licence: Hy3-preview ships under the Tencent Hy Community licence, Talkie-1930-13B-IT under Apache 2.0.
| Field | Hy3-preview | Talkie-1930-13B-IT |
|---|---|---|
| Summary | | |
| Verdict | Blocked. Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights are not deployable for EU users or workloads, regardless of architecture quality (a territory-check sketch follows the table). | Conditional. Per the published model card, Talkie-1930-13B-IT is an Apache 2.0 instruction-tuned 13B model trained exclusively on pre-1931 English text (260B tokens, sourced from public-domain reference works). The training-data transparency is unusually clean for AI Act Article 53 purposes; the limits are vendor jurisdiction (a US-affiliated research collaboration with no published EU DPA) and the deliberately vintage corpus, which makes the model unsuitable for any task requiring post-1931 factual knowledge (a self-hosting sketch follows the table). |
| Last reviewed | 2026-04-28 | 2026-04-28 |
| Open-weight | | |
| Licence | Tencent Hy Community | Apache 2.0 |
| Commercial use | EU territory excluded | Unrestricted |
| Training data | Undisclosed | Documented |
| Origin | China | US (research) |
| Performance & pricing | | |
| Quality index | 42/100 | — |
| Speed | 85 tok/s | — |
| Blended price | — | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.
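The territory restriction driving the blocked verdict is mechanical enough to encode as a pre-deployment gate. Below is a minimal sketch assuming the verdicts in this card: `MODEL_POLICIES` and `is_deployable` are hypothetical names, not part of any vendor tooling, and the region codes are illustrative.

```python
"""Minimal sketch: encoding the licence-territory verdicts above as a
pre-deployment check. MODEL_POLICIES and is_deployable() are hypothetical
names invented for this example, not vendor tooling."""

# Regions excluded by the Tencent Hy Community License Agreement
# (Sections 1(l) and 5(c)): EU, UK, South Korea.
HY3_EXCLUDED_REGIONS = {"EU", "UK", "KR"}

MODEL_POLICIES = {
    "Hy3-preview": {"excluded_regions": HY3_EXCLUDED_REGIONS},
    "Talkie-1930-13B-IT": {"excluded_regions": set()},  # Apache 2.0: no territory limits
}

def is_deployable(model: str, region: str) -> bool:
    """Return True if the licence permits serving `model` to `region`."""
    return region not in MODEL_POLICIES[model]["excluded_regions"]

assert not is_deployable("Hy3-preview", "EU")        # blocked for EU workloads
assert is_deployable("Talkie-1930-13B-IT", "EU")     # conditional, but licence-clean
```

Note the gate covers only the licence dimension; the conditional verdict's other caveats (vendor jurisdiction, vintage corpus) still need human review.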
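For the conditional entry, a minimal self-hosting sketch using Hugging Face `transformers` is shown below. The hub repo id is an assumption (this card does not publish one), and the dtype and device placement are ordinary defaults for a 13B checkpoint, not vendor guidance.

```python
"""Minimal self-hosting sketch for an Apache 2.0 instruction-tuned 13B model.
The repo id below is a placeholder guess, not a confirmed hub location;
check the published model card for the real one."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "talkie-lm/Talkie-1930-13B-IT"  # hypothetical hub id

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.bfloat16,  # a 13B model needs roughly 26 GB in bf16
    device_map="auto",           # shard across whatever GPUs are present
)

# Mind the vintage corpus: prompts requiring post-1931 factual knowledge
# are out of scope for this model by design.
prompt = "Describe the construction of a transatlantic telegraph cable."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The pre-1931 training cutoff should shape prompt design as well as task selection; anything that depends on later events or terminology falls outside the model's intended use.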