About LLM Radar

An independent, opinionated register of which AI providers and open-weight models European teams can actually ship.

Why this exists

European CTOs, DPOs and platform leads spend too much time reconciling vendor trust pages, CNIL opinions and legal-blog interpretations. LLM Radar compiles that signal into one place so a procurement decision takes minutes, not weeks.

We do not host your data. We evaluate vendors.

Who runs this

LLM Radar is written and edited by Ali Madjaji. Every entry carries the reviewer's name and the date it was last checked — so you always know who signed off on the verdict and when.

More on what I work on: alimadjaji.com · LinkedIn · reach me at hello@llmradar.eu.

Corrections

Think we got a fact wrong, missed a relevant certification, or that we're citing a stale version of a DPA? Send us the link:

corrections@llmradar.eu

Right of reply

Vendors who disagree with our assessment can submit a formal response. We publish it verbatim alongside the entry, labelled Vendor response. We do not require you to agree with our verdict — we require that readers see both sides.

vendor-reply@llmradar.eu

Editorial stance

  • Verdicts are editorial judgements, not legal advice.
  • We cite public documentation and date every assessment. Stale entries are flagged.
  • Our methodology is published and applies uniformly to every entry.
  • We accept no vendor payment, sponsorship or affiliate arrangement in exchange for coverage or rating.

What we don't track

LLM Radar is focused on sovereignty — European hosting, jurisdiction, GDPR, AI Act, licensing. We deliberately don't cover:

  • Capability benchmarks — MMLU, HumanEval, reasoning leaderboards. Artificial Analysis already does this well.
  • Pricing — it changes every quarter and is one click away on every vendor's site.
  • Latency / uptime — operational SLAs sit in the contracts readers will negotiate anyway.

If a benchmark result directly informs an AI Act risk assessment, we cite it. Otherwise we stay in our lane.