agentic control plane for user-scoped AI governance
an open-source control plane that makes AI agents enterprise-ready by enforcing user-scoped identity, policy, and audit trails on every model call.
⚠️ early-stage: gatewaystack is under active development. the first layer,
identifiabl, is live and published (npm i identifiabl). additional modules (transformabl, validatabl, limitabl, proxyabl, explicabl) are on the roadmap.
until now, most ai systems have been built around model access, not user identity.
access typically happens through a single api key — often shared across a team, department, or entire organization.
that makes it impossible to answer basic questions: who made this request? which user saw which data? how much did each user or org spend?
gatewaystack is a user-scoped trust and governance gateway for llm apps.
it lets you bind every model call to a verified user identity, enforce role- and scope-based policy, and produce per-user audit trails.
→ view the gatewaystack github repo
→ read the architecture overview
→ contact reducibl for enterprise deployments
as organizations adopt agentic systems and user-specific ai workflows, identity, policy, and governance become mandatory. shared api keys cannot support enterprise-grade access control or compliance.
a new layer is required — centered around users, not models.
the user-scoped trust and governance gateway
modern AI apps are really three-party systems:
the user — a real human with identity, roles, and permissions
the llm — a model acting on their behalf (chatgpt, claude)
your backend — the trusted data and tools the model needs to access
these three parties all talk to each other, but they don’t share a common, cryptographically verified identity layer.
the gap: the llm knows who the user is (they logged into chatgpt). your backend doesn't. so it can't filter data per-user, enforce role-based access, or attribute actions to a real person.
without a unifying identity layer, you get shared api keys, all-or-nothing data access, and no per-user audit trail.
this identity gap across user ↔ llm ↔ backend is what gatewaystack calls the three-party problem.
it shows up in two directions:
while gatewaystack doesn't necessarily solve the root cause of each incident below, it would have prevented these resources from being accessed without a cryptographically verified user identity on the request.
user-scoped requests act as a safety net for all kinds of common mistakes:
→ xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs
→ DeepSeek database left user data, chat histories exposed for anyone to see
→ How I Reverse Engineered a Billion-Dollar Legal AI Tool and Found 100k+ Confidential Files
→ Leading AI Companies Accidentally Leak Their Passwords and Digital Keys on GitHub - What You Need to Know
“how do i ensure only licensed doctors use medical models, only analysts access financial data, and contractors can’t send sensitive prompts?”
user ↔ backend ↔ llm
without gatewaystack:
import express from 'express';
import OpenAI from 'openai';

const app = express();
app.use(express.json());
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post('/chat', async (req, res) => {
  const { model, prompt } = req.body;
  const response = await openai.chat.completions.create({
    model, // Anyone can use gpt-4-medical
    messages: [{ role: 'user', content: prompt }]
  });
  res.json(response);
});
with gatewaystack:
// same express + openai setup as the snippet above
app.post('/chat', async (req, res) => {
const userId = req.headers['x-user-id'];
const userRole = req.headers['x-user-role']; // "doctor", "analyst", etc.
const userScopes = req.headers['x-user-scopes']?.split(' ') || [];
// Gateway already enforced: only doctors with medical:write can reach here
const response = await openai.chat.completions.create({
model: req.body.model,
messages: [{ role: 'user', content: req.body.prompt }],
user: userId // OpenAI audit trail
});
res.json(response);
});
gateway policy:
{
"gpt-4-medical": {
"requiredRoles": ["doctor", "physician_assistant"],
"requiredScopes": ["medical:write"]
}
}
the gateway enforces role + scope checks before forwarding to your backend. if a nurse tries to use gpt-4-medical, they get 403 Forbidden.
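to make that enforcement concrete, here is a minimal sketch of the check the gateway performs against that policy (illustrative only — not the actual gatewaystack source; function name and shape are assumptions):

// returns true when the user's role and scopes satisfy the policy entry
// for the requested model
function isAllowed(policy, model, userRole, userScopes) {
  const rule = policy[model];
  if (!rule) return true; // no rule for this model -> unrestricted
  const roleOk = rule.requiredRoles.includes(userRole);
  const scopesOk = rule.requiredScopes.every((s) => userScopes.includes(s));
  return roleOk && scopesOk;
}

// isAllowed(policy, 'gpt-4-medical', 'nurse', ['medical:read']) -> false -> 403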
“how do i let chatgpt read my calendar without exposing everyone’s calendar?”
user ↔ llm ↔ backend
without gatewaystack:
app.get('/calendar', async (_req, res) => {
const events = await getAllEvents(); // Everyone sees everything (placeholder data-access helper)
res.json(events);
});
with gatewaystack:
app.get('/calendar', async (req, res) => {
const userId = req.headers['x-user-id']; // Verified by gateway
const events = await getUserEvents(userId); // placeholder helper: returns only this user's events
res.json(events);
});
the gateway validates the oauth token, extracts the user identity, and injects X-User-Id — so your backend can safely filter data per-user.
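under the hood, that identity injection looks roughly like the middleware below — a minimal sketch assuming a jwt-based oidc token and the jsonwebtoken package (identifiabl's real implementation additionally handles oidc discovery, jwks, and apps sdk tokens):

import jwt from 'jsonwebtoken';

// verify the bearer token and inject the verified identity downstream
function identify(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    const claims = jwt.verify(token, process.env.OIDC_PUBLIC_KEY);
    req.headers['x-user-id'] = claims.sub; // cryptographically verified
    next();
  } catch {
    res.status(401).json({ error: 'invalid or missing token' });
  }
}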
attaching a cryptographically verified user identity to a shared request context is the key that makes request-level governance possible.
without solving the three-party problem, you can't filter data per-user, enforce per-role model and tool access, attribute spend to a tenant, or produce audit trails that name a real person.
gatewaystack solves both directions by binding cryptographic user identity to every AI request.
gatewaystack is composed of modular packages that can run standalone or as a cohesive six-layer pipeline for complete AI governance.
the user-scoped trust and governance gateway sits between applications (or agents) and model providers, ensuring that every model interaction is authenticated, authorized, observable, and governed.
gatewaystack defines this layer.
gatewaystack sits between your application and the llm provider. it receives identity-bound requests (via oidc or apps sdk tokens), applies governance, and then forwards the request to the model.
it provides the foundational primitives every agent ecosystem needs — starting with secure user authentication and expanding into full lifecycle governance.
1. identifiabl — user identity & authentication
every model call must be tied to a real user, tenant, and context. identifiabl verifies identity, handles oidc/apps sdk tokens, and attaches identity metadata to each request.
📦 implementation:
ai-auth-gateway (published)
2. transformabl — content transformation & safety pre-processing
before a request can be validated or routed, its content must be transformed into a safe, structured, model-ready form.
transformabl handles pre-model transformations, including pii redaction and anonymization, content classification, and prompt cleanup and structuring.
this layer ensures that the request entering validatabl and proxyabl is clean, safe, and structured, enabling fine-grained governance and more intelligent routing decisions.
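as a flavor of what this layer does, here is an illustrative pii scrub (a sketch, not transformabl's actual implementation):

// replace obvious pii patterns before the prompt reaches the model
function redact(prompt) {
  return prompt
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[EMAIL]')
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]');
}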
📦 implementation:
ai-content-gateway (roadmap)
3. validatabl — access & policy enforcement
once a request is tied to a user, validatabl ensures it follows your rules: which roles may use which models, which scopes each tool requires, and which features each plan or tenant can access.
this is where most governance decisions happen.
📦 implementation:
ai-policy-gateway (roadmap)
4. limitabl — rate limits, quotas, and spend controls
every user, org, or agent needs usage constraints: rate limits, request quotas, and spend caps.
limitabl enforces these constraints in two phases — pre-flight checks before execution and usage accounting after the model responds.
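a minimal sketch of the two phases (in-memory and illustrative — a real deployment would use a shared store):

const usage = new Map(); // userId -> tokens spent this window

// phase 1: pre-flight — reject before calling the model
function preflight(userId, estimatedTokens, quota) {
  const spent = usage.get(userId) || 0;
  return spent + estimatedTokens <= quota;
}

// phase 2: accounting — deduct actual usage after the response
function record(userId, actualTokens) {
  usage.set(userId, (usage.get(userId) || 0) + actualTokens);
}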
📦 implementation:
ai-rate-limit-gateway + ai-cost-gateway (roadmap)
5. proxyabl — in-path routing & execution
proxyabl is the gateway execution layer — the in-path request processor that routes each request to the right provider and model, executes the call, and returns the response.
📦 implementation:
ai-routing-gateway (roadmap)
6. explicabl — observability & audit
the control plane must record who called which model, with what request and response metadata, which policy decisions were applied, and what it cost.
explicabl provides the audit logs, traces, and metadata needed for trust, security, debugging, and compliance.
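an illustrative shape for a single audit record (field names are assumptions, not gatewaystack's published schema):

{
  "timestamp": "2025-01-15T10:32:00Z",
  "user_id": "auth0|12345",
  "org_id": "clinic-42",
  "model": "gpt-4-medical",
  "policy_decision": "allow",
  "scopes": ["medical:write"],
  "tokens": { "prompt": 512, "completion": 128 },
  "cost_usd": 0.023
}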
📦 implementation:
ai-observability-gateway + ai-audit-gateway (roadmap)
gatewaystack is built for teams that need user-scoped, auditable, policy-enforced access to ai models — not just raw model access behind a shared api key.
example use cases:
a platform with 10,000+ doctors needs to ensure every AI-assisted diagnosis is tied to the licensed physician who requested it, with full audit trails for hipaa and internal review.
before gatewaystack: all AI calls run through a shared openai key — impossible to prove which physician made which request.
with gatewaystack:
identifiabl binds every request to a verified physician (user_id, org_id = clinic/hospital)
validatabl enforces role:physician and scope:diagnosis:write per tool
explicabl emits immutable audit logs with the physician's identity on every model call
result: user-bound, tenant-aware, fully auditable AI diagnostics.
a global company rolls out an internal copilot that can search confluence, jira, google drive, and internal apis. employees authenticate with sso (okta / entra / auth0), but the copilot calls the llm with a shared api key.
before gatewaystack: security teams can’t enforce “only finance analysts can run this tool” or audit which employee triggered which action.
with gatewaystack:
identifiabl binds the copilot session to the employee's SSO identity (sub from okta)
validatabl enforces per-role tool access (“legal can see these repos, not those”)
limitabl applies per-user rate limits and spend caps
explicabl produces identity-level audit trails for every copilot interaction
result: full identity-level governance without changing the copilot's business logic.
a saas platform offers AI features across free, pro, and enterprise tiers. today, all AI usage runs through a single openai key per environment — making it impossible to answer “how much did org x spend?” or “which users hit quota?”
before gatewaystack: one big shared key. no tenant-level attribution. cost overruns are invisible until the bill arrives.
with gatewaystack:
identifiabl attaches user_id and org_id to every request
validatabl enforces tier-based feature access (plan:free, plan:pro, feature:advanced-rag)
limitabl enforces per-tenant quotas and budgets
explicabl produces per-tenant usage reports
result: per-tenant accountability without changing app logic.
identity providers (auth0, okta, cognito, entra id)
handle login and token minting, but stop at the edge of your app. they don't understand model calls, tools, or which provider a request is going to — and they don't enforce user identity inside the AI gateway.
api gateways and service meshes (kong, apigee, aws api gateway, istio, envoy)
great at path/method-level auth and rate limiting, but they treat llms like any other http backend. you can build AI governance on top of them (kong plugins, istio policies, lambda authorizers), but it requires significant custom development to replicate what gatewaystack provides out-of-the-box: user-scoped identity normalization, per-tool scope enforcement, pre-flight cost checks, apps sdk / mcp compliance, and AI-specific audit trails.
cloud AI gateways (cloudflare AI gateway, azure openai + api management, vertex AI, bedrock guardrails)
focus on provider routing, quota, and safety filters at the tenant or API key level. user identity is usually out-of-band or left to the application.
hand-rolled middleware
many teams glue together jwt validation, headers, and logging inside their app or a thin node/go proxy. it works… until you need to support multiple agents, providers, tenants, and audit/regulatory requirements.
gatewaystack is different: user-scoped identity normalization, per-tool scope enforcement, pre-flight cost checks, apps sdk / mcp compliance, and AI-specific audit trails come out of the box.
example: kong + openai
to get user-scoped AI governance with kong, you’d need to:
jwt plugin (validate tokens)
request-transformer plugin (inject headers)
significant development + ongoing maintenance.
with gatewaystack: configure .env file, deploy, done.
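for illustration, a hypothetical .env along those lines (variable names are assumptions, not the documented gatewaystack config — see the quickstart guide for the real ones):

OIDC_ISSUER=https://your-tenant.okta.com
OIDC_AUDIENCE=your-api
UPSTREAM_URL=https://api.openai.com
POLICY_FILE=./policy.json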
you can still run gatewaystack alongside traditional api gateways — it’s the user-scoped identity and governance slice of your AI stack.
user
→ identifiabl (who is calling?)
→ transformabl (prepare, clean, classify, anonymize)
→ validatabl (is this allowed?)
→ limitabl (how much can they use? pre-flight constraints)
→ proxyabl (where does it go? execute)
→ llm provider (model call)
→ [limitabl] (deduct usage - optional accounting phase)
→ explicabl (what happened?)
→ response
each module intercepts the request, adds or checks metadata, and guarantees that the call is:
identified, transformed, validated, constrained, routed, and audited.
this is the foundation of user-scoped ai.
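conceptually, the pipeline composes like an express middleware chain (module names are real; the factory-style wiring below is a sketch, not gatewaystack's actual API):

app.use(identifiabl());   // verify token, attach user/org identity
app.use(transformabl());  // clean, classify, anonymize content
app.use(validatabl());    // enforce role/scope policy
app.use(limitabl());      // pre-flight quota and spend checks
app.use(proxyabl());      // route and execute the model call
app.use(explicabl());     // record the outcome for audit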
most teams don’t roll out every module on day one. a common path looks like:
x-user-id / x-org-idover time, gatewaystack becomes the shared trust and governance layer for all of your ai workloads — internal copilots, saas features, and apps sdk / mcp-based agents.
a finance analyst uses an internal copilot to summarize a contract. identifiabl verifies her sso identity, transformabl scrubs the contract text, validatabl confirms her role may use the contract tool, limitabl checks her quota and budget, proxyabl routes the call to the model provider, and explicabl records the interaction under her identity.
every step is user-bound, governed, and auditable.
every request flows from your app through gatewaystack's modules before it reaches an llm provider — identified, transformed, validated, constrained, routed, and audited.
gatewaystack works natively with oidc identity providers (okta, entra id, auth0), the openai apps sdk, and mcp-based agents.
it acts as a drop-in governance layer without requiring changes to your application logic.
for concepts and architecture: continue reading the module white papers linked above.
for implementation and deployment:
→ quickstart guide
→ deployment guide
→ migration guide
for policy examples and reference code:
→ policy examples
→ reference implementations
want to contact us for enterprise deployments?
→ reducibl applied ai studio