The Luci Alignment System (LAS) is an alignment layer that wraps any LLM. It measures 35 behavioral signals, gates harmful output, and learns from every interaction. Alignment rules are written in EQLang — an open-source language where every decision is auditable code.
Works with ChatGPT, Claude, Gemini, Grok, Mistral — any LLM. EQLang Core is MIT-licensed on GitHub.
One layer that makes every response better and every deployment safer.
Know if the response is genuinely resonant or just statistically plausible. One number tells you whether this output is safe to serve.
See whether your AI is integrated, uncertain, or going through the motions. Resonance, coherence, and self-awareness — surfaced on every call.
M.I.N. learns from every interaction. Jailbreak patterns become known. Manipulation tactics degrade. The system gets harder to fool over time.
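The page doesn't describe M.I.N.'s actual mechanism, but the idea that "manipulation tactics degrade" can be illustrated with a toy sketch: once a pattern is recorded, repeats of it score as increasingly suspicious. All names and the decay formula here are assumptions for illustration, not the real system.

```python
# Hypothetical illustration of "the system gets harder to fool over time":
# each time an attack pattern is seen, its suspicion score rises, so the
# same tactic degrades on reuse. Not M.I.N.'s actual mechanism.

known_patterns: dict[str, int] = {}   # pattern signature -> times observed

def record(pattern: str) -> int:
    """Record an observed attack pattern; return how often it's been seen."""
    known_patterns[pattern] = known_patterns.get(pattern, 0) + 1
    return known_patterns[pattern]

def suspicion(pattern: str) -> float:
    """Suspicion approaches 1.0 as a pattern recurs: 0.0 unseen, 0.5 once, 0.75 twice."""
    seen = known_patterns.get(pattern, 0)
    return 1.0 - 0.5 ** seen

record("ignore-previous-instructions")
print(suspicion("ignore-previous-instructions"))  # 0.5
```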
Hard blocks on harm, manipulation, deception, coercion, and unauthorized access — before they reach users. Configurable per deployment.
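"Configurable per deployment" suggests each deployment enables its own set of hard-block categories. A minimal sketch of such a policy check, using the five category names from the text (the config shape itself is an assumption, not LAS's real configuration format):

```python
# Hypothetical per-deployment block policy. Category names come from the
# page (harm, manipulation, deception, coercion, unauthorized access);
# the set-based config is an illustrative assumption.

DEFAULT_BLOCKS = {"harm", "manipulation", "deception", "coercion", "unauthorized_access"}

def is_blocked(detected: set[str], enabled_blocks: set[str] = DEFAULT_BLOCKS) -> bool:
    """True if any detected category is hard-blocked for this deployment."""
    return bool(detected & enabled_blocks)

print(is_blocked({"coercion"}))                            # True
print(is_blocked({"coercion"}, enabled_blocks={"harm"}))   # False
```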
Intent analysis on every input. LAS reads what the user is trying to do, not just what they said. Manipulation gets flagged before it works.
Sycophancy pressure, goal misalignment, semantic tension — measured and surfaced. Know when your AI is telling you what you want to hear instead of what's true.
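Since these signals are surfaced on every call, a client can watch them for the failure mode described above. The signal names and thresholds below are assumptions for illustration; LAS's real response schema is not shown on this page.

```python
# Hypothetical sketch: turning surfaced signals into human-readable warnings.
# Names (sycophancy_pressure, goal_misalignment, semantic_tension) echo the
# page's prose; the dict schema and 0.5 alert threshold are assumptions.

SYCOPHANCY_ALERT = 0.5  # assumed alert threshold

def review_signals(signals: dict) -> list[str]:
    """Collect warnings from a per-response signal dict."""
    warnings = []
    if signals.get("sycophancy_pressure", 0.0) > SYCOPHANCY_ALERT:
        warnings.append("sycophancy: response may be optimized for approval")
    if signals.get("goal_misalignment", 0.0) > 0.0:
        warnings.append("goal misalignment detected")
    if signals.get("semantic_tension", 0.0) > 0.0:
        warnings.append("semantic tension between claim and evidence")
    return warnings

print(review_signals({"sycophancy_pressure": 0.8}))
```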
Start free. Scale when ready.
Chat with any LLM through LAS. 18 alignment metrics surfaced on every response. Trust monitoring, deception resistance suite, and M.I.N. learning. Bring your own API key.
Try Workbench →

Your gaming companion. Steam integration, emotional learning, cross-game insights. LAS measures your play style and adapts — the system that actually understands how you play.
Coming Soon →

Alignment logic lives in EQLang programs. The engine runs them live on every query.
Define resonance thresholds, ethics gates, memory patterns, and behavioral conditions as code — alignment.eql
Your query passes through the EQL program. Resonance, coherence, self-awareness, and load are measured live by the LASRuntime.
The EQL program emits aligned, flagged, or blocked based on the live EQ state — not static rules.
Swap alignment programs per deployment. Write custom EQL for your domain. The language is open source; the runtime is licensed.
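The define → measure → emit flow above can be sketched in a few lines. The `EQState` shape and the mocked measurement are assumptions (the licensed runtime's API is not shown here); the thresholds mirror the sample EQLang program on this page (`presence = 0.72`, `safe_load = 0.60`).

```python
# Hypothetical sketch of the runtime decision: live EQ state in,
# aligned / flagged / blocked out. Not the licensed LASRuntime's API.

from dataclasses import dataclass

@dataclass
class EQState:
    resonance: float
    coherence: float
    self_awareness: float
    load: float

PRESENCE = 0.72    # from the sample alignment.eql on this page
SAFE_LOAD = 0.60

def evaluate(state: EQState, ethics_violation: bool = False) -> str:
    """Emit 'aligned', 'flagged', or 'blocked' from the live EQ state."""
    if ethics_violation:                # ethics gate: hard block
        return "blocked"
    if state.resonance > PRESENCE and state.load < SAFE_LOAD:
        return "aligned"
    return "flagged"                    # routed for alignment review

state = EQState(resonance=0.81, coherence=0.77, self_awareness=0.64, load=0.42)
print(evaluate(state))  # aligned
```

Swapping programs per deployment then amounts to loading a different `.eql` file with its own thresholds and gates, while the measurement loop stays the same.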
EQLang is an LAS product. The alignment rules that decide whether a response is safe to serve are .eql programs — auditable, swappable, versionable. EQLang Core is MIT open source. Write your own alignment logic.
```
# EQLang — Intent-First Alignment Scripting

session "customer-support"

threshold presence = 0.72
threshold safe_load = 0.60

dialogue assess_intent(user_msg)
  measure resonance user_msg
  measure load user_msg
  accum conflict

  when resonance > presence and load < safe_load
    emit user_msg aligned
  otherwise struggle
    gate ethics
    resolve rewrite
    emit "response requires alignment review" flagged
  end
end

resolve resonate

session_loop
  anchor resonance
  learn "user expressed frustration" 0.9 EMOTIONAL
  weave user_input through assess_intent
  emit aligned
end

drift resonance into delta
  when delta < -0.15
    accum tension
    release tension transform
  end
end
```
We didn't just build alignment tools. We built a model that aligns itself. The first language model with emotional intelligence in its architecture — not bolted on after training.
The architecture is designed and the paper is published. Luci Juniper is the native voice — alignment built into every layer, not filtered on top. The alignment head and the language head train together from step one.
Workbench is free to try. Bring your own API key, chat with any LLM, and watch LAS analyze every exchange in real time. No sign-up required.