MiMo-V2.5 is the omnimodal sibling of MiMo-V2.5-Pro — text, vision, audio, and video on a single sparse-MoE backbone, also MIT-licensed. The posture matches the Pro: deployable weights, but the China origin and stage-level-only training disclosure mean any EU rollout needs self-hosting plus a deployer-prepared GPAI compliance file, with extra attention to Article 50 transparency obligations for synthetic and biometric outputs.
Sovereignty
Licence: MIT
Commercial: Unrestricted
Training data: Categories only
Origin: China
Licence facts
Parameters
310B total / 15B active (+ 729M vision, 261M audio encoders)
Architecture
Sparse Mixture-of-Experts with hybrid SWA + Global Attention
Vendor jurisdiction is the People's Republic of China — non-adequate under GDPR Article 45.
Training data is reported by stage and aggregate volume; no dataset list to support AI Act Article 53(1)(d) summary obligations.
Audio and vision encoders introduce additional GDPR special-category and AI Act biometric considerations not covered by text-only deployment models — Article 5 (prohibited practices) and Article 50 (transparency) reviews recommended.
The vendor-hosted API had not published an EU Data Processing Agreement (DPA) at the time of review; self-hosting is recommended for personal-data workloads.