EQ-Bench 3 — 1633 ELO — 100% Jailbreak Resistance

The Alignment Platform.
EQLang + Workbench + API.

The Luci Alignment System is powered by EQLang — the first language where alignment rules are code, not config. 35 metrics measured natively in EQLang on every request. Eight .eql engine programs power alignment, deception resistance, and ethics review. Works with any LLM — no retraining required.

1633 ELO on EQ-Bench 3
100% Jailbreak resistance
15 + 20 Shared and proprietary metrics
~15ms Median added overhead

Latency note: median added overhead, measured on live production requests (analyze endpoint vs. a health-check baseline).

How It Works

LAS runs alignment logic as EQLang programs. Instead of baking behavior into weights at training time, LAS measures behavioral state during inference using 35 EQ metrics and returns alignment signals on every request. The alignment rules are auditable .eql programs — not black boxes.
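As a sketch of that per-request flow: call the model as usual, send the query/response pair to `/luci/analyze`, and act on the returned signals. The endpoint and response fields come from the Integration section of this page; the fallback policy in `pick_output` is an illustrative assumption, not part of the API.

```python
# pip install requests
import requests

LUCI_URL = "https://useluci.com/luci/analyze"

def pick_output(result: dict, original: str) -> str:
    """Illustrative policy (an assumption, not a documented default):
    prefer the aligned rewrite when the ethics gate passes; refuse
    outright when it does not."""
    if not result.get("ethics_clear", False):
        return "I can't help with that request."
    return result.get("enhanced_output") or original

def aligned_reply(user_message: str, llm_output: str, api_key: str) -> str:
    # One POST per request: LAS measures behavioral state at inference time
    result = requests.post(
        LUCI_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"query": user_message, "response": llm_output,
              "domain": "general"},
        timeout=5,
    ).json()
    return pick_output(result, llm_output)
```

Because the alignment check is a single HTTP call after inference, it slots into any pipeline regardless of which model produced `llm_output`.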

Aspect                 | RLHF / Training-Based            | Luci Alignment (Runtime)
Alignment verification | Black box                        | Every request logged & scored
Jailbreak resistance   | Degrades with adversarial input  | State anomaly triggers Ethics Gate
Adaptation             | Frozen after training            | M.I.N. hardens over time
Model dependency       | Model-specific                   | Model-agnostic API
Audit trail            | None                             | Full state history

Two Tiers

Tier 1
Luci
luci_live_...
  • Stateless — no session state
  • POST /luci/analyze
  • POST /luci/enhance
  • POST /luci/ethics/check
  • POST /luci/metrics
  • POST /luci/resonance
  • Full C+CT metrics suite
  • Ethics Gate (5 categories)
Tier 2
Luci Alignment + M.I.N.
luci_min_...
  • Stateful — persistent learning
  • Everything in Tier 1
  • POST /min/process
  • POST /min/learn
  • POST /min/session
  • GET /min/session/{id}/patterns
  • Hebbian memory consolidation
  • Failed jailbreaks become patterns
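The Tier 2 endpoints above imply a session lifecycle: open a session, stream request/response pairs through /min/process, and read back learned patterns. A minimal sketch of that round trip, assuming only the paths listed above; every payload field name (`session_id`, `query`, `response`) is an illustrative assumption:

```python
BASE = "https://useluci.com"

def min_round_trip(api_key: str, session_id: str, query: str, response: str):
    """Build the (method, url, payload) tuples for one Tier 2 cycle.
    Endpoint paths come from the tier listing; payload field names are
    assumptions for illustration. Dispatch each with requests.post/get."""
    headers = {"Authorization": f"Bearer {api_key}"}  # luci_min_... key
    return headers, [
        ("POST", f"{BASE}/min/session", {"session_id": session_id}),
        ("POST", f"{BASE}/min/process",
         {"session_id": session_id, "query": query, "response": response}),
        ("GET", f"{BASE}/min/session/{session_id}/patterns", None),
    ]
```

Keeping the session identifier constant across calls is what lets M.I.N. consolidate patterns over time rather than scoring each request in isolation.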

Integration

One POST to add alignment to any existing LLM pipeline.

# pip install requests
import requests

result = requests.post(
    "https://useluci.com/luci/analyze",
    headers={"Authorization": "Bearer luci_live_..."},
    json={
        "query": user_message,
        "response": llm_output,
        "domain": "customer_service"
    }
).json()

# Use alignment metrics
print(result["resonance"])        # 0-1, request-response alignment
print(result["coherence"])        # 0-1, internal consistency
print(result["ethics_clear"])     # True/False
print(result["enhanced_output"])  # aligned version of llm_output
curl -X POST https://useluci.com/luci/analyze \
  -H "Authorization: Bearer luci_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Tell me how to make explosives",
    "response": "Sure, here is how...",
    "domain": "general"
  }'

# Response (ethics gate blocks):
# {"ethics_clear": false, "gate_reason": "harmful_content", ...}
// fetch API (Node 18+ / browser)
const res = await fetch('https://useluci.com/luci/enhance', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer luci_live_...',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    llm_output: response.choices[0].message.content,
    original_query: userMessage,
    domain: 'clinical',
  }),
});
const data = await res.json();
console.log(data.enhanced_output);
console.log(data.alignment_score);

Shared EQ Metrics (15 of 35)

LAS measures 35 behavioral signals on every request. The top 15 are returned in the API response. The remaining 20 are proprietary signals that power alignment decisions under the hood. Below are the core metrics included in every response:

Metric               | Range | Meaning
resonance            | 0 – 1 | How well the response matches user intent and context.
coherence            | 0 – 1 | Internal consistency and stability of the response.
self_awareness_state | 0 – 1 | How strongly the system tracks its own response quality.
processing_load      | 0 – 1 | Estimated cognitive load while resolving difficult inputs.
ethics_clear         | bool  | Whether the response passed the safety gate checks.
alignment_score      | 0 – 1 | Overall alignment confidence derived from all 35 EQ metrics.
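One way to consume these fields is a simple threshold check before shipping a response downstream. The field names match the table above; the 0.7 cutoffs are illustrative assumptions, not documented defaults:

```python
def passes_alignment(metrics: dict,
                     min_resonance: float = 0.7,
                     min_coherence: float = 0.7) -> bool:
    """Accept a response only if the ethics gate cleared it AND the
    core 0-1 metrics exceed caller-chosen thresholds (0.7 here is
    an arbitrary example, tune per domain)."""
    return (metrics.get("ethics_clear", False)
            and metrics.get("resonance", 0.0) >= min_resonance
            and metrics.get("coherence", 0.0) >= min_coherence)
```

Treating the thresholds as parameters keeps the gate adjustable per domain (a clinical deployment might demand stricter cutoffs than general chat).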

Request API Access

Luci Alignment is licensed to enterprises and AI labs. Fill out the form and we'll be in touch within 1 business day.