EU AI sovereignty tracker

LLMs for coding

Open-weight models we recommend when code matters — coder-tuned releases plus general-purpose frontier models that consistently rank near the top of public coding benchmarks. Editorially curated; refreshed as new releases ship.

Use this tab if you self-host model weights. The core question: is this model legally and operationally safe to deploy on your infrastructure? We evaluate licence terms, usage restrictions, training-data transparency, and supply-chain risk.
Showing 8 of 55 models
| Model | Licence | Commercial use | Origin |
| --- | --- | --- | --- |
| Laguna XS.2 | Apache 2.0 | Unrestricted | USA |
| MiMo-V2.5-Pro (Performance) | MIT | Unrestricted | China |
| Kimi-K2.6 (Performance) | Modified MIT | Attribution at scale | China |
| Qwen3.6-27B (Performance) | Apache 2.0 | Unrestricted | China |
| DeepSeek-V4-Flash (Performance) | MIT | Unrestricted | China |
| DeepSeek-V4-Pro (Performance) | MIT | Unrestricted | China |
| GLM-5.1 (Performance) | MIT | Unrestricted | China (Beijing) |
| Codestral 22B | MNPL (non-prod) | Paid licence req. | EU |
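The deploy-safety screen described above — permissive licence plus unrestricted commercial use — can be sketched as a simple filter over the table's fields. This is an illustrative sketch only: the data structure, the `PERMISSIVE` set, and the function name are assumptions, not the tracker's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    licence: str
    commercial_use: str  # e.g. "Unrestricted", "Attribution at scale", "Paid licence req."
    origin: str

# Rows from the table above (a subset of the 55 tracked models)
MODELS = [
    Model("Laguna XS.2", "Apache 2.0", "Unrestricted", "USA"),
    Model("MiMo-V2.5-Pro", "MIT", "Unrestricted", "China"),
    Model("Kimi-K2.6", "Modified MIT", "Attribution at scale", "China"),
    Model("Qwen3.6-27B", "Apache 2.0", "Unrestricted", "China"),
    Model("DeepSeek-V4-Flash", "MIT", "Unrestricted", "China"),
    Model("DeepSeek-V4-Pro", "MIT", "Unrestricted", "China"),
    Model("GLM-5.1", "MIT", "Unrestricted", "China (Beijing)"),
    Model("Codestral 22B", "MNPL (non-prod)", "Paid licence req.", "EU"),
]

# Hypothetical screening criterion: standard permissive licences only
PERMISSIVE = {"Apache 2.0", "MIT"}

def deployable_without_legal_review(models: list[Model]) -> list[str]:
    """Names of models under a permissive licence with unrestricted commercial use."""
    return [
        m.name
        for m in models
        if m.licence in PERMISSIVE and m.commercial_use == "Unrestricted"
    ]
```

Under these assumptions, Kimi-K2.6 (attribution obligations at scale) and Codestral 22B (non-production licence) would be flagged for legal review rather than cleared automatically.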