Per the published model card, Ling-2.6 1T is an MIT-licensed 1-trillion-parameter MoE with a 262k-token context, hybrid MLA + Linear attention, and multi-token-prediction support, targeted at production agentic workloads. The permissive weights make EU self-hosting possible in principle, though the deployment footprint is non-trivial; vendor jurisdiction (Ant Group, China) and undisclosed training data remain the blockers for regulated buyers.
Sovereignty
Licence: MIT
Commercial: Unrestricted
Training data: Undisclosed
Origin: China
Licence facts
Parameters
1T total (active count not disclosed; sparse MoE)
Architecture
Hybrid MLA (Multi-head Latent Attention) + Linear Attention, multi-token-prediction support
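As an illustration of what a hybrid stack of this kind typically implies, the sketch below interleaves linear-attention layers with periodic MLA (full-attention) layers and derives multi-token-prediction training targets. The layer count, the 1-in-4 MLA ratio, and the prediction horizon are illustrative assumptions, not values from the model card.

```python
# Hypothetical layer-pattern sketch for a hybrid attention stack.
# All numbers below are illustrative assumptions, not model-card values.
from dataclasses import dataclass


@dataclass
class LayerSpec:
    index: int
    attention: str  # "mla" (full attention, latent-compressed KV) or "linear"


def build_hybrid_stack(num_layers: int = 32, full_attn_every: int = 4) -> list[LayerSpec]:
    """Interleave linear-attention layers with periodic MLA (full-attention) layers.

    Linear attention keeps per-token state roughly constant in sequence length,
    which is what makes very long contexts (e.g. 262k tokens) tractable; the
    periodic MLA layers retain exact global attention. The 1-in-4 ratio here is
    an assumption.
    """
    return [
        LayerSpec(i, "mla" if (i + 1) % full_attn_every == 0 else "linear")
        for i in range(num_layers)
    ]


def mtp_targets(token_ids: list[int], horizon: int = 2) -> list[tuple[int, ...]]:
    """Multi-token prediction: each position is trained to predict the next
    `horizon` tokens rather than just one (horizon is an assumed value)."""
    return [
        tuple(token_ids[i + 1 : i + 1 + horizon])
        for i in range(len(token_ids) - horizon)
    ]


if __name__ == "__main__":
    stack = build_hybrid_stack()
    print(sum(1 for layer in stack if layer.attention == "mla"), "MLA layers of", len(stack))
    print(mtp_targets([101, 7, 42, 9, 3], horizon=2))
```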
The vendor (Ant Group / inclusionAI) is headquartered in mainland China, a jurisdiction without a GDPR adequacy decision. Self-hosting on EU infrastructure mitigates the personal-data transfer path; relying on an inclusionAI-hosted endpoint does not.
Training-data composition is not disclosed in the model card, which weakens the AI Act Article 53 transparency posture and complicates copyright due diligence for downstream operators.
The 1T-parameter footprint is operationally heavy: realistic only for buyers who operate a multi-node inference cluster or are willing to depend on a hosted endpoint.
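To make "operationally heavy" concrete, here is a minimal weight-memory sizing sketch under stated assumptions (80 GB accelerators; weights only, ignoring KV cache, activations, and runtime overhead):

```python
# Back-of-envelope weight-memory sizing for a 1T-parameter model.
# Assumptions (not from the model card): 80 GB accelerators, and only the
# weights are counted; KV cache, activations, and runtime overhead are ignored.
import math

GIB = 1024**3


def weight_memory_gib(params: float, bytes_per_param: float) -> float:
    """GiB needed just to hold the weights at a given precision."""
    return params * bytes_per_param / GIB


def min_gpus(params: float, bytes_per_param: float, gpu_mem_gib: float = 80.0) -> int:
    """Lower bound on accelerators needed to hold the weights alone."""
    return math.ceil(weight_memory_gib(params, bytes_per_param) / gpu_mem_gib)


if __name__ == "__main__":
    one_trillion = 1e12
    for label, bpp in [("bf16", 2.0), ("fp8", 1.0), ("int4", 0.5)]:
        gib = weight_memory_gib(one_trillion, bpp)
        print(f"{label}: ~{gib:,.0f} GiB of weights -> >= {min_gpus(one_trillion, bpp)} x 80 GB GPUs")
```

Under these assumptions, even fp8 weights (roughly 930 GiB) already exceed a single 8 x 80 GB node before any 262k-token KV cache is accounted for, which is why the multi-node-or-hosted trade-off above applies.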