Side-by-side comparison of Gemma 4 26B A4B Instruct (Google DeepMind · United States) and Hy3-preview (Tencent · China) for self-hosted, open-weight deployment. Gemma 4 26B A4B Instruct is rated conditional; Hy3-preview is blocked. The decisive difference is the licence: Gemma 4 26B A4B Instruct ships under Apache 2.0, while Hy3-preview is under the Tencent Hy Community licence.
| Field | Gemma 4 26B A4B Instruct | Hy3-preview |
|---|---|---|
| Summary | | |
| Verdict | Conditional. Based on published licence terms, Gemma 4 26B A4B Instruct ships under pure Apache 2.0 with no prohibited-use carve-outs, a departure from prior Gemma generations. The sparse-MoE architecture (25.2B total / 3.8B active parameters) puts it in an ambiguous zone for EU AI Act GPAI systemic-risk classification, and US origin plus image-input support add transparency obligations that deployers should document. | Blocked. Per the Tencent Hy Community License Agreement (Sections 1(l) and 5(c)), the licence's defined Territory excludes the European Union, the United Kingdom and South Korea, and licensees are expressly prohibited from using, distributing or displaying the model or its outputs outside that Territory. Under those terms the weights are not deployable for EU users or workloads, regardless of architecture quality. |
| Last reviewed | 2026-04-17 | 2026-04-28 |
| Open-weight | | |
| Licence | Apache 2.0 | Tencent Hy Community |
| Commercial use | Unrestricted | EU territory excluded |
| Training data | Domain-level summary | Undisclosed |
| Origin | United States | China |
| Performance & pricing | | |
| Quality index | 27/100 | 42/100 |
| Speed | — | 85 tok/s |
| Blended price | — | — |
| Context window | — | — |
| Evidence | | |
| Sources | | |
No overlapping sources between the two entries.
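The territory restriction driving the Blocked verdict can be encoded as a simple pre-deployment check. The sketch below is a hypothetical illustration, not a real API: the model identifiers, region codes, and the `deployable` helper are all assumptions introduced here; the only grounded facts are the exclusions themselves (Apache 2.0 has no territorial carve-outs, while the Tencent Hy Community licence excludes the EU, the UK and South Korea).

```python
# Hypothetical licence-territory gate for self-hosted deployment.
# Model names and region codes are illustrative assumptions.

# Regions excluded by each model's licence, per the comparison above.
EXCLUDED_REGIONS = {
    "gemma-4-26b-a4b-instruct": set(),   # Apache 2.0: no carve-outs
    "hy3-preview": {"EU", "UK", "KR"},   # Tencent Hy Community Territory exclusions
}

def deployable(model: str, region: str) -> bool:
    """Return True if the model's licence permits serving this region."""
    return region not in EXCLUDED_REGIONS.get(model, set())
```

For example, `deployable("hy3-preview", "EU")` returns `False`, matching the Blocked verdict for EU users or workloads, while `deployable("gemma-4-26b-a4b-instruct", "EU")` returns `True` subject to the conditional caveats noted above.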