Per the published LICENSE file, MiMo-V2.5-Pro ships under MIT, so the weights themselves carry no commercial restriction. The remaining EU-readiness gaps are the China-based vendor and a corpus disclosure that names training-stage categories (text pre-training, multimodal pre-training, SFT, RL, MOPD) without listing datasets; both should be addressed in any GPAI deployer file before regulated use.
Sovereignty
Licence: MIT
Commercial: Unrestricted
Training data: Categories only
Origin: China
Model facts
Parameters: 1.02T total / 42B active
Architecture: Mixture-of-Experts (384 routed experts, top-8) with hybrid Sliding Window + Global Attention
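To make the routing figures concrete, here is a minimal sketch of top-8 selection over 384 experts. It assumes a standard softmax-gated router; the function name, gating scheme, and hidden width are illustrative assumptions, not details taken from the model card.

```python
import torch
import torch.nn.functional as F

# Illustrative only: a generic top-k MoE router matching the published
# counts (384 routed experts, top-8). The gating scheme is an assumption.
NUM_EXPERTS = 384
TOP_K = 8

def route_tokens(hidden: torch.Tensor, gate_weight: torch.Tensor):
    """Pick TOP_K of NUM_EXPERTS per token and return normalised mix weights.

    hidden:      (num_tokens, d_model) token representations
    gate_weight: (d_model, NUM_EXPERTS) router projection
    """
    logits = hidden @ gate_weight                     # (num_tokens, NUM_EXPERTS)
    top_logits, top_idx = logits.topk(TOP_K, dim=-1)  # each token keeps 8 experts
    weights = F.softmax(top_logits, dim=-1)           # renormalise over the 8
    return top_idx, weights

# 4 tokens with an assumed hidden width of 1024
idx, w = route_tokens(torch.randn(4, 1024), torch.randn(1024, NUM_EXPERTS))
print(idx.shape, w.shape)  # torch.Size([4, 8]) torch.Size([4, 8])
```

Only the selected eight experts run per token, which is how a 1.02T-parameter model can activate roughly 42B parameters per forward pass.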
Vendor jurisdiction is the People's Republic of China, which holds no adequacy decision under GDPR Article 45.
Training data is described by stage and aggregate token count, not by dataset list; deployer-side AI Act Article 53(1)(d) work cannot rely on the model card alone.
Hybrid attention with a 128-token sliding window may surface unexpected behaviour on documents longer than the dense-attention windows used at evaluation time; worth validating on the deployer's own long-context corpus.
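To ground that check, the sketch below builds the causal mask a 128-token sliding window implies, showing exactly which earlier positions a windowed layer can see. How the hybrid interleaves windowed and global layers is not specified on the card, so the comment about global layers is an assumption.

```python
import torch

# Illustrative only: the causal attention mask implied by a 128-token
# sliding window. Interleaving with global-attention layers is assumed.
WINDOW = 128

def sliding_window_mask(seq_len: int, window: int = WINDOW) -> torch.Tensor:
    """True where query position i may attend key position j."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (i - j < window)      # causal and within the window

mask = sliding_window_mask(512)
# In a windowed layer, token 300 sees only positions 173..300; anything
# earlier must reach it through the global-attention layers instead.
print(mask[300].sum().item())  # 128
```

A cheap deployer-side probe is to place known facts inside and outside that window in the deployer's own long documents and compare retrieval quality across the boundary.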