Per the NVIDIA Open Model Agreement, Nemotron-3 Nano Omni is commercially usable with a NOTICE-file attribution requirement and U.S. export-compliance obligations. It is a multimodal MoE (31B total / ~3B active) accepting video, audio, image, and text input, with reasoning-style chain-of-thought output. Training data is unusually well-documented (1,395 datasets, modality breakdown, CSAM scanning), which is useful for AI Act Article 53 mapping. Vendor jurisdiction remains the US.
Sovereignty
Licence: NVIDIA Open Model
Commercial: Permitted (with attribution)
Training data: Disclosed
Origin: USA
Licence facts
Parameters
31B total / ~3B active per token (MoE)
Architecture
Mamba2-Transformer hybrid MoE backbone with C-RADIO v4-H vision encoder and Parakeet speech encoder
Context window
256k tokens
Modalities
Video (mp4, ≤2 min), audio (wav/mp3, ≤1 hour), image (jpg/png), text → text output
Training tokens
~717B tokens across 1,395 datasets (text, audio, image, video)
The NVIDIA Open Model Agreement is a custom licence, not an OSI-approved open-source licence; counsel review is required before redistributing or sublicensing the weights.
U.S. export-control and OFAC compliance terms are written into the licence itself, which matters for EU operators serving users in sanctioned jurisdictions.
U.S. vendor jurisdiction, with no published EU data-processing addendum for the weights themselves. This is moot for self-hosted deployments, but it rules out any NVIDIA-managed inference path for sensitive personal data.