Per the published model card, Ling-2.6 Flash is an MIT-licensed 104B-total / 7.4B-active MoE built on a hybrid Lightning-Linear + MLA attention design and positioned for agentic and tool-use workflows. The permissive licence makes the weights deployable on EU infrastructure; the headline risks for regulated buyers are vendor jurisdiction (Ant Group's inclusionAI lab, headquartered in China) and the absence of any training-data disclosure in the model card.
Sovereignty
Licence: MIT
Commercial: Unrestricted
Training data: Undisclosed
Origin: China
Licence facts
Parameters: 104B total / 7.4B active per token (sparse MoE; see the routing sketch below)
Architecture: Hybrid 1:7 Lightning-Linear + MLA attention, upgraded from GQA via incremental training (see the attention sketch below)
Use case: Agent workflows, tool use, multi-step planning, high-frequency production workloads
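The active-parameter figure follows from sparse expert routing: each token is dispatched to only a few experts, so most of the 104B weights sit idle on any given forward pass. The sketch below is a generic top-k MoE layer in PyTorch, illustrative only; the expert count, top_k, and dimensions are toy values, not Ling's actual configuration.

```python
# Generic top-k MoE routing sketch (toy sizes, not Ling's real config).
# Only top_k of n_experts run per token, which is why a 104B-total model
# can cost roughly 7.4B parameters of compute per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=64, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                           # x: (tokens, d_model)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, -1)  # route to top_k experts
        weights = F.softmax(weights, dim=-1)        # renormalise over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():         # run each chosen expert once
                mask = idx[:, slot] == e
                w = weights[mask, slot].unsqueeze(-1)
                out[mask] += w * self.experts[int(e)](x[mask])
        return out

layer = SparseMoELayer()
print(layer(torch.randn(8, 512)).shape)  # torch.Size([8, 512]); 4 of 64 experts per token
```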
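The 1:7 ratio describes layer interleaving: seven linear-attention (Lightning) layers for every MLA layer. Neither component's internals are spelled out in the card, so the sketch below substitutes a generic kernelized linear attention (elu+1 feature map) and plain causal softmax attention to show what the hybrid buys: seven of every eight layers avoid materialising the quadratic score matrix.

```python
# Hybrid-stack sketch: generic causal linear attention standing in for the
# Lightning layers, plain softmax attention standing in for the MLA layers.
# Both are single-head toys; the real model's internals differ.
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v):
    # O(seq) memory: running sums replace the (seq, seq) score matrix.
    q, k = F.elu(q) + 1, F.elu(k) + 1                            # positive feature map
    kv = torch.cumsum(k.unsqueeze(-1) * v.unsqueeze(-2), dim=0)  # (seq, d, d)
    z = torch.cumsum(k, dim=0)                                   # (seq, d)
    num = torch.einsum("td,tde->te", q, kv)
    den = (q * z).sum(-1, keepdim=True).clamp_min(1e-6)
    return num / den

def causal_softmax_attention(q, k, v):
    # Standard O(seq^2) attention, the role MLA plays in the hybrid.
    seq, d = q.shape
    scores = (q @ k.T) / d ** 0.5
    mask = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
    return torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1) @ v

# 1:7 interleave: every eighth layer is the softmax (MLA-style) layer.
x = torch.randn(16, 64)
for layer_idx in range(8):
    attn = causal_softmax_attention if layer_idx % 8 == 7 else causal_linear_attention
    x = x + attn(x, x, x)   # toy residual stack with shared q=k=v
print(x.shape)              # torch.Size([16, 64])
```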
The vendor (Ant Group / inclusionAI) is headquartered in mainland China, a jurisdiction without a GDPR adequacy decision. Self-hosting on EU infrastructure mitigates the personal-data transfer risk (a minimal serving sketch follows); any inclusionAI-hosted endpoint does not.
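A minimal serving sketch for the self-hosted path, assuming the weights are published on Hugging Face and that vLLM can load the custom architecture; the repo id below is a placeholder, not a confirmed identifier. Run this on EU-resident GPUs and no prompt or completion touches an inclusionAI endpoint.

```python
# Self-hosted serving sketch with vLLM. The repo id is a PLACEHOLDER,
# not confirmed; check the published model card for the real identifier.
from vllm import LLM, SamplingParams

llm = LLM(
    model="inclusionAI/Ling-flash-2.6",  # placeholder repo id (assumption)
    trust_remote_code=True,              # custom MoE/attention code ships with the repo
    tensor_parallel_size=4,              # adjust to your GPU count; 104B weights need sharding
)
params = SamplingParams(temperature=0.2, max_tokens=256)
out = llm.generate(["Draft a tool-use plan for invoice reconciliation."], params)
print(out[0].outputs[0].text)
```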
Training-data composition is not disclosed in the model card, which weakens the AI Act Article 53 transparency posture and complicates copyright due diligence for downstream operators.