Laguna XS.2
Poolside · USA
Per the published model card, Laguna XS.2 is an Apache 2.0-licensed mixture-of-experts model with 33B total and 3B active parameters per token, positioned for local agentic coding, with a 131k-token context window and an FP8 KV cache aimed at single-machine inference. The permissive licence and self-hostable weights make EU-side deployment straightforward; the limits are vendor jurisdiction (San Francisco-headquartered, with no published EU DPA for hosted endpoints) and a model card that does not describe the training corpus.
Sovereignty
- Licence: Apache 2.0
- Commercial: Unrestricted
- Training data: Undisclosed
- Origin: USA
Licence facts
- Parameters: 33B total / 3B active per token (MoE)
- Architecture: 256 experts + 1 shared expert, mixed sliding-window attention (10 global / 30 SWA layers)
- Context window: 131,072 tokens
- Inference: FP8 KV cache, optimised for local single-machine deployment
- Use case: Agentic coding and long-horizon software engineering
- Released: April 2026
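The single-machine inference claim can be sanity-checked with back-of-envelope arithmetic on the KV cache. The layer split (10 global / 30 SWA), the 131,072-token context, and the FP8 (1 byte per element) cache come from the model card; the KV-head count, head dimension, and sliding-window length below are placeholder assumptions, not published figures:

```python
# Rough FP8 KV-cache size at full context.
# From the model card: 10 global + 30 sliding-window layers,
# 131,072-token context, FP8 cache (1 byte per element).
# ASSUMED (not in the card): 8 KV heads, head_dim 128,
# 4,096-token sliding window -- substitute real config values.

def kv_cache_bytes(seq_len, n_global, n_swa, window,
                   kv_heads=8, head_dim=128, bytes_per_elem=1):
    per_token = 2 * kv_heads * head_dim * bytes_per_elem  # K and V
    global_part = n_global * seq_len * per_token          # full context cached
    swa_part = n_swa * min(seq_len, window) * per_token   # capped at window
    return global_part + swa_part

total = kv_cache_bytes(131_072, n_global=10, n_swa=30, window=4_096)
print(f"{total / 2**30:.2f} GiB")
```

Under these assumed dimensions the cache lands below 3 GiB at full context, which is consistent with the card's single-machine positioning; a model that cached the full 131k tokens in every layer would need roughly four times as much.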
Known risks
- Training-data composition is not disclosed in the model card — limits AI Act Article 53 transparency posture for downstream operators.
- Vendor headquartered in the US with no published EU data-processing addendum — relevant only for users of Poolside's hosted API, not for self-hosted weight deployments.
Reviewed by Ali Madjaji · Last reviewed 2026-05-03