The Constraining Permissions Framework (CPF) governs AI actions: which tools an agent can call, under what conditions, and with what approvals.
Traditional security focuses on what systems can read. AI changes the game — because AI can act.
An agent with API access can read your data, sure. But it can also write to databases, send emails, create tickets, trigger workflows. The risk surface isn't just data exposure — it's unauthorized action.
Most platforms ignore this. Corral was built for it.
CPF defines what each AI entity can do — and what it can't. Not just which tools are available, but the conditions under which they're allowed.
| Element | Description |
|---|---|
| Tool Boundaries | Which tools/APIs can this agent access? |
| Action Conditions | Under what circumstances? (user role, time, context) |
| Approval Flows | Which actions need human approval before execution? |
| Uncertainty Handling | What happens when the AI isn't sure? (Ask, escalate, abort) |
| Audit Trail | Every action logged. Full accountability. |
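Put together, the five elements in the table could look something like this minimal Python sketch. Every name, class, and threshold here is an illustrative assumption, not Corral's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class ToolRule:
    allowed_roles: set                  # tool boundary + condition: which roles may call this tool
    approval_threshold: float = None    # approval flow: values at/above this need human sign-off


@dataclass
class Cpf:
    rules: dict = field(default_factory=dict)
    audit: list = field(default_factory=list)   # audit trail: every decision is logged
    min_confidence: float = 0.8                 # uncertainty handling cutoff

    def decide(self, tool, role, value=0.0, confidence=1.0):
        """Return 'allow', 'ask', 'needs_approval', or 'deny' and log the decision."""
        rule = self.rules.get(tool)
        if rule is None or role not in rule.allowed_roles:
            decision = "deny"            # outside this agent's tool boundaries
        elif confidence < self.min_confidence:
            decision = "ask"             # the AI isn't sure: ask a human instead of acting
        elif rule.approval_threshold is not None and value >= rule.approval_threshold:
            decision = "needs_approval"  # human approval before execution
        else:
            decision = "allow"
        self.audit.append((tool, role, value, decision))
        return decision


cpf = Cpf(rules={
    "create_ticket": ToolRule(allowed_roles={"hr_agent", "support_agent"}),
    "issue_refund": ToolRule(allowed_roles={"support_agent"}, approval_threshold=100.0),
})

cpf.decide("issue_refund", "support_agent", value=40.0)    # 'allow'
cpf.decide("issue_refund", "support_agent", value=250.0)   # 'needs_approval'
cpf.decide("issue_refund", "hr_agent", value=10.0)         # 'deny': wrong role
cpf.decide("create_ticket", "hr_agent", confidence=0.3)    # 'ask': low confidence
```

The key design choice is that `decide` never returns without appending to the audit log, so denials and escalations are as accountable as allowed actions.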
**HR Agent:** Can read policies, can't access compensation data. Can create tickets, can't modify employee records.

**Support Agent:** Can read customer history, can issue refunds under $100. Larger refunds need approval.

**Executive Assistant:** Can schedule meetings, can send emails as drafts. Calendar invites on behalf of an exec need confirmation.
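As a concrete illustration, the support-agent refund rule above reduces to a single decision function. The function name and signature are invented for this sketch, not taken from Corral:

```python
def refund_decision(amount: float, role: str) -> str:
    """Apply the support-agent refund boundary: allow, escalate, or deny."""
    if role != "support_agent":
        return "deny"             # outside this tool's boundary entirely
    if amount < 100:
        return "allow"            # refunds under $100: autonomous
    return "needs_approval"       # larger refunds: human approval first


print(refund_decision(49.99, "support_agent"))  # allow
print(refund_decision(250.0, "support_agent"))  # needs_approval
print(refund_decision(10.0, "hr_agent"))        # deny
```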
CPF isn't just a feature — it's what makes AI deployable in regulated environments. Define boundaries before deployment. Audit actions after. Sleep at night.