Side-by-side comparison of Hy3-preview (Tencent) and Talkie-1930-13B Base (Talkie-LM (research)) for self-hosted deployment of open-weight models. Hy3-preview is rated blocked; Talkie-1930-13B Base is conditional. They part ways on licence: Hy3-preview is "Tencent Hy Community", Talkie-1930-13B Base is "Apache 2.0".
| Field | Hy3-preview | Talkie-1930-13B Base |
|---|---|---|
| Summary | | |
| Verdict | Blocked Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights are not deployable for EU users or workloads, regardless of architecture quality. | Conditional Per the published model card, Talkie-1930-13B Base is the pretrained sibling of the Talkie-1930 instruction-tuned release: an Apache 2.0 13B model trained on 260B tokens of pre-1931 English text drawn entirely from public-domain sources. Training-data transparency is unusually clean for AI Act Article 53 purposes; the limits are vendor jurisdiction (a US-affiliated research collaboration with no published EU DPA) and the deliberate vintage corpus, which makes the model unsuitable for any task requiring post-1931 factual knowledge. |
| Last reviewed | 2026-04-28 | 2026-05-03 |
| Open-weight | | |
| Licence | Tencent Hy Community | Apache 2.0 |
| Commercial use | EU, UK and South Korea excluded | Unrestricted |
| Training data | Undisclosed | Documented |
| Origin | China | US (research) |
| Performance & pricing | | |
| Quality index | 42/100 | — |
| Speed | 85 tok/s | — |
| Blended price | — | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.