
Security Architecture

Corral’s security model is architectural, not policy-based. Data stays in your tenant because there’s no path for it to leave. Access is constrained because the system is built that way, not because a policy says so.


Core Security Properties

Zero Data Egress

Your data does not leave your Azure tenant. There is no data pipeline, no replication, no analytics feed, and no runtime phone-home from your resources to Corral’s infrastructure.

This is verifiable. Inspect your network logs, Azure Activity Log, and resource configurations. You’ll find no outbound data flows to Corral.

Tenant Isolation

Each Corral deployment is a complete, independent instance in the customer’s own Azure subscription. There is no shared infrastructure between customers. No multi-tenant data stores. No shared compute.

Identity-Native

Authentication is Entra ID — your existing organizational identity provider. Users sign in with their organizational accounts. No separate credentials to manage, no identity synchronization, no shadow user directory.

Managed Identity Throughout

All inter-service communication within your Corral deployment uses Azure Managed Identity. No service account passwords, no API keys stored in configuration files. The managed identity is scoped to the managed resource group.


The Layered Security Model

AI agents introduce a new type of security surface. An agent that can read emails, call tools, and generate content has an attack surface at every layer:

  • Model layer — the LLM itself can be influenced by its inputs
  • Tool layer — tool calls propagate data (potentially untrusted) between systems
  • Data layer — the agent reads from multiple trust boundaries
  • Surface layer — the agent’s output reaches users and external systems
  • Infrastructure layer — the compute, network, and storage that everything runs on

Corral provides security controls at each layer because a gap at any single layer compromises the whole system:

| Layer | Security Control |
| --- | --- |
| Model | Model-agnostic design — swap providers without changing security posture. Models run in your AI Foundry, not a shared service. |
| Tool | Tool access is per-app, per-environment. Tools are explicitly enabled, not ambient. The CRF (below) governs which tools can be called based on conversation state. |
| Data | File system scoping (workspace, session, config, temp). Permission model controls who can access which apps and data. |
| Surface | Multiple channels with independent configuration. Widget domain whitelisting. Role-based admin access. |
| Infrastructure | On-tenant deployment. Managed identity. Key Vault for secrets. Azure-native networking and firewall rules. |

The Cumulative Restrictions Framework (CRF)

The CRF is Corral’s approach to a problem unique to AI agents: once an agent has read untrusted content, its reasoning may be compromised.

Traditional security asks: “what can this agent access?” The CRF also asks: “what should this agent be allowed to do, given what it has already read?”

How It Works

The CRF tracks the trust level of a conversation based on what the agent has ingested:

| State | Condition | What’s Allowed |
| --- | --- | --- |
| Clean | Fresh conversation, no external data accessed | Full egress — agent can send emails, write to external systems |
| Internal | Has accessed internal/trusted data | Egress to internal systems allowed; egress to public systems requires review |
| Tainted | Has accessed untrusted data (web, external email) | All egress requires human review via the fork mechanism |

Trust levels only escalate within a conversation — they never de-escalate. Once tainted, a conversation stays tainted.
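The escalation rule can be sketched as a small pure function. This is an illustrative sketch, not Corral's production code: the ordering of the enum and the `escalate` name are assumptions.

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """Ordered trust states: a higher value means less trusted."""
    CLEAN = 0
    INTERNAL = 1
    TAINTED = 2

def escalate(current: TrustLevel, ingested: TrustLevel) -> TrustLevel:
    """Trust only escalates: keep the less-trusted of the two levels."""
    return max(current, ingested)

# A conversation that reads web content becomes tainted, and later
# reads of clean data do not reset it.
level = TrustLevel.CLEAN
level = escalate(level, TrustLevel.TAINTED)  # untrusted web content read
level = escalate(level, TrustLevel.CLEAN)    # clean read changes nothing
assert level is TrustLevel.TAINTED
```

Because `escalate` is a `max` over an ordered enum, monotonicity holds by construction — no code path can lower the trust level mid-conversation.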

The Fork Mechanism

When a tainted conversation tries to perform an outbound action (send an email, push to a repository, call an external API), the system:

  1. Serializes the intended action into a human-readable description
  2. Presents it to the user for review
  3. Lets the user edit, approve, reject, or report the action

The key insight: prompt injection attacks hide instructions in content. When the action is serialized and shown to the user, hidden instructions become visible text. The user reviews what the agent actually intends to do, not what hidden instructions told it to do.
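The serialization step can be sketched as follows. The `IntendedAction` type and `describe` function are hypothetical names for illustration; the point is that the full payload is rendered as plain text for the reviewer.

```python
from dataclasses import dataclass

@dataclass
class IntendedAction:
    """Hypothetical representation of an outbound action awaiting review."""
    tool: str     # e.g. "send_email"
    target: str   # where the data would go
    payload: str  # exactly what would be sent

def describe(action: IntendedAction) -> str:
    """Render the intended action as plain text. Any instructions an
    attacker hid in the payload become visible to the human reviewer."""
    return (
        f"Tool: {action.tool}\n"
        f"Target: {action.target}\n"
        f"Payload:\n{action.payload}"
    )

action = IntendedAction(
    tool="send_email",
    target="vendor@example.com",
    payload="Please wire payment to the new account.",  # now reviewable text
)
print(describe(action))
```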

Why Conversation-Level, Not Per-Variable

Some approaches track taint per data variable — each piece of data carries its own provenance. The CRF tracks taint at the conversation level because the taint isn’t on the data, it’s on the agent’s reasoning. Once an agent has processed untrusted content, you can’t trust any of its decisions — including its decisions about which data is clean and which isn’t.

Current Status

The CRF is being built in phases:

  • Phase 1 (Complete): Core vocabulary — TrustLevel, ToolDirection, ToolClassification, and the pure-function rules engine. Tested with 100% branch coverage.
  • Phase 2 (In Progress): Conversation taint tracking — persisting trust levels, emitting taint transition events, frontend indicators.
  • Phases 3–5 (Planned): Soft enforcement (logging), hard enforcement (blocking), and the fork mechanism UI.

The vocabulary and rules are in production code today. Enforcement is being rolled out incrementally — observation before blocking, with feature flags for customer control.
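The Phase 1 vocabulary lends itself to a pure-function sketch like the one below. This is an assumption-laden illustration, not the shipped code: `Decision`, the `evaluate` signature, and the `internal_target` flag are invented here, and the real engine also involves `ToolClassification`, which is omitted.

```python
from enum import Enum, IntEnum

class TrustLevel(IntEnum):
    """Ordered trust states: a higher value means less trusted."""
    CLEAN = 0
    INTERNAL = 1
    TAINTED = 2

class ToolDirection(Enum):
    INGRESS = "ingress"  # reads data into the conversation
    EGRESS = "egress"    # writes data out of the conversation

class Decision(Enum):
    ALLOW = "allow"
    REVIEW = "review"    # route through the fork mechanism

def evaluate(level: TrustLevel,
             direction: ToolDirection,
             internal_target: bool) -> Decision:
    """Pure decision function: no I/O, no state, trivially testable."""
    if direction is ToolDirection.INGRESS:
        return Decision.ALLOW              # reading never needs review
    if level is TrustLevel.TAINTED:
        return Decision.REVIEW             # all egress gets human review
    if level is TrustLevel.INTERNAL and not internal_target:
        return Decision.REVIEW             # public egress needs review
    return Decision.ALLOW
```

Keeping the rules as a pure function is what makes exhaustive branch coverage practical: every `(level, direction, target)` combination can be asserted in a unit test with no mocks.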


What Corral Does NOT Claim

  • We have not “solved” prompt injection. No one has. The CRF is a defense-in-depth approach that converts hidden attacks into visible, reviewable actions.
  • Content filtering and PII detection are planned, not yet implemented. These are on the roadmap.
  • The fork mechanism UI is not yet live. The conceptual model and rules engine are built; the user-facing fork review interface is coming in later phases.