
enlight-ai Runtime Platform

AI architecture should feel like modelling cognition — not wiring APIs.

enlight sits between your app and your LLM. Requests flow through real DAG workflows — branching, caching, looping back through memory — not a linear chain of prompts. Self-hosted. Source available. Your data stays yours.

1. Expose

One endpoint or a hundred. Each one its own entry point, its own workflow, its own rules.

2. Configure

One model provider or ten. Mix Ollama, Anthropic, OpenAI, Gemini — assign them freely across your endpoints.

3. Design

Wire any workflow to any endpoint. Steps, tools, memory, branching — compose exactly what you need, nothing more.

Your topology. Your rules. No constraints.
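The expose-configure-design flow above can be sketched in plain Python. Everything here — `Endpoint`, the provider strings, the workflow functions — is illustrative, not enlight's actual API:

```python
# Hypothetical sketch: endpoints mapped to providers and workflows.
# None of these names come from enlight's real configuration syntax.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Endpoint:
    path: str
    provider: str                    # e.g. "ollama", "anthropic", "openai", "gemini"
    workflow: Callable[[str], str]   # any workflow, wired to any endpoint

def summarize(prompt: str) -> str:
    return f"[summary] {prompt}"

def answer(prompt: str) -> str:
    return f"[answer] {prompt}"

# One endpoint or a hundred -- each with its own provider and workflow.
endpoints = {
    "/summarize": Endpoint("/summarize", "ollama", summarize),
    "/chat": Endpoint("/chat", "anthropic", answer),
}

def route(path: str, prompt: str) -> str:
    ep = endpoints[path]
    return ep.workflow(prompt)
```

Swapping a provider or workflow is a one-line change per endpoint; nothing else in the topology moves.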

Full data control

Self-hosted, source available. Your models, your infra, your data. Nothing leaves your stack.

AI that feels like thinking

Real DAG workflows. Branching, memory loops, conditional exits. Not a pipeline.
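The shape of a DAG workflow — branching and a conditional exit — fits in a few lines. This is a toy runner to show the idea, not enlight's engine:

```python
# Illustrative only: each step mutates shared state and returns the
# name of the next node, or None to exit. Branching is just a return.
def classify(state):
    state["intent"] = "question" if state["input"].endswith("?") else "statement"
    return "answer" if state["intent"] == "question" else "ack"

def answer(state):
    state["output"] = f"Answering: {state['input']}"
    return None  # conditional exit: nothing left to do

def ack(state):
    state["output"] = "Noted."
    return None

STEPS = {"classify": classify, "answer": answer, "ack": ack}

def run(text):
    state, step = {"input": text}, "classify"
    while step is not None:
        step = STEPS[step](state)  # follow edges until a step exits
    return state["output"]
```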

Built for developers

No magic, no black box. Every step explicit, readable, testable. You own the code.

Any model, anywhere

Ollama, Anthropic, OpenAI, Gemini. Swap per endpoint, mix freely.

RAG in one step

Your store, your retrieval logic. Retrieve, inject, stream — one step, no framework lock-in.
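The retrieve-and-inject pattern, sketched generically — the keyword-overlap scoring below is a stand-in for whatever store and retrieval logic you bring:

```python
# Toy RAG step: retrieve the best-matching doc, inject it as context.
# Replace `retrieve` with your own vector store; nothing here is
# enlight-specific.
DOCS = [
    "enlight runs workflows as DAGs.",
    "Providers can be mixed per endpoint.",
]

def retrieve(query, k=1):
    # Toy relevance: word overlap between query and document.
    words = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The enriched prompt then goes to the main model call and streams back as usual.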

Tool calling without magic

The model decides, you execute. Fully explicit — no hidden protocols, every provider, every model.
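"The model decides, you execute" reduces to a plain lookup and dispatch. The JSON shape below is an assumption standing in for a real provider reply:

```python
# Explicit tool dispatch: the model proposes a call, your code runs it.
# No hidden protocol -- parsing and execution are both in your hands.
import json

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def execute_tool_call(model_reply: str):
    """Parse a tool call the model emitted and run it ourselves."""
    call = json.loads(model_reply)   # e.g. {"tool": "add", "args": {"a": 2, "b": 3}}
    fn = TOOLS[call["tool"]]         # just a dictionary lookup, nothing magic
    return fn(**call["args"])
```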

Session memory that accumulates

Facts survive restarts. The LLM always knows what the user told it — without you managing state.

Prompt enrichment before the main call

A silent pre-call extracts language, topic, and intent — the main call gets a richer context automatically.
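The silent pre-call is a cheap pass whose output gets folded into the main prompt. The heuristics below stand in for a real model call; the field names are illustrative:

```python
# Sketch of prompt enrichment: a pre-call annotates the request, the
# main prompt is built from those annotations. A real pre-call would be
# a small model; these rules just mimic its output shape.
def pre_call(text: str) -> dict:
    return {
        "language": "en",  # stand-in; a real pre-call would detect this
        "intent": "question" if "?" in text else "request",
        "topic": text.split()[0].lower() if text.split() else "",
    }

def enrich(text: str) -> str:
    meta = pre_call(text)
    return (f"[language={meta['language']} intent={meta['intent']} "
            f"topic={meta['topic']}]\n{text}")
```

The main call then receives the annotated prompt instead of the raw user text.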

Cognitive research pipeline

Four isolated steps: understand, search, reason, respond. Grounded answers, source attribution, cross-turn memory.
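The four isolated steps can be sketched as plain functions chained by a runner, each seeing only shared state. The doc URI and wording are placeholders, not real retrieval:

```python
# Illustrative pipeline: understand -> search -> reason -> respond.
# Each step reads and writes one state dict; sources flow through to
# the final answer for attribution.
def understand(state):
    state["query"] = state["input"].strip().rstrip("?")

def search(state):
    state["sources"] = [f"doc://{state['query'].replace(' ', '-')}"]

def reason(state):
    state["claim"] = f"Based on {state['sources'][0]}: {state['query']}."

def respond(state):
    state["answer"] = f"{state['claim']} [source: {state['sources'][0]}]"

def pipeline(text):
    state = {"input": text}
    for step in (understand, search, reason, respond):
        step(state)
    return state["answer"]
```

Because the steps are isolated, any one of them can be swapped — a different retriever, a different reasoner — without touching the others.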

Automated translation pipeline

Translate Markdown to multiple languages, validate section-by-section, open a GitHub PR with inline review comments.
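The section-by-section validation idea can be shown in isolation (the translation and GitHub PR parts are left out; `sections` and `validate` are hypothetical helpers, not enlight's API):

```python
# Sketch: split a Markdown doc on level-2 headings and check that a
# translation preserves the section structure before opening a PR.
def sections(md: str):
    return [s for s in md.split("\n## ") if s.strip()]

def validate(original: str, translated: str) -> bool:
    # A translation that merges or drops sections fails validation.
    return len(sections(original)) == len(sections(translated))
```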
