Intelligence Configuration
Every app in Corral has an intelligence configuration that defines how its AI agent reasons and responds. This page covers model selection, system prompts, and the intelligence types available.
Configuring Intelligence
In the admin console, each app has an Intelligence tab where you configure:
- Intelligence type — how the agent processes messages and uses tools
- Model — which LLM the agent uses
- System prompt — the instructions that define the agent’s personality, capabilities, and constraints
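Conceptually, the three settings form a small per-app configuration record. The sketch below is only an illustration of that shape; the field names are assumptions, not the platform's actual schema:

```python
# Illustrative shape of an app's intelligence configuration.
# Field names are assumptions for illustration, not Corral's schema.
intelligence_config = {
    "intelligence_type": "dynamic_prompt",  # or "chat_completion"
    "model": "gpt-4o",                      # any model enabled for the instance
    "system_prompt": "You are a marketing assistant for Contoso...",
}
```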
Changes to intelligence configuration take effect immediately in the Test environment (Build & Test tab). To push changes to users, publish a new version.
Intelligence Types
Dynamic Prompt (Production)
The primary intelligence type. It supports:
- Context injectors — automatically include workspace context, current date/time, and stored tool results in the agent’s context
- Configurable system prompt — up to 50,000 characters
- Tool access — both prebuilt (all tools available) and dynamic (tools selected based on the incoming message)
- Prompt caching — multi-level cache hints (static, session, dynamic) for efficient model usage
This is what the default Core Assistant uses and what most agents should use.
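The multi-level cache hints work by ordering the agent's context from most stable to most volatile, so the stable prefix can be reused across requests. A minimal sketch of that idea, assuming a simple segment structure (the function and field names here are illustrative, not the platform's API):

```python
def assemble_prompt(system_prompt, workspace_context, tool_results):
    """Illustrative context assembly with cache-level hints.

    Mirrors the static/session/dynamic cache levels by ordering
    segments from most stable to most volatile.
    """
    return [
        # Static: the configured system prompt rarely changes, so it is
        # the best candidate for a long-lived cache entry.
        {"cache": "static", "content": system_prompt},
        # Session: workspace context is stable for the life of a session.
        {"cache": "session", "content": workspace_context},
        # Dynamic: stored tool results and the current date/time change
        # per message, so they come last and gain little from caching.
        {"cache": "dynamic", "content": tool_results},
    ]
```

Keeping the volatile segments at the end means a change to tool results invalidates only the tail of the context, not the cached prefix.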
Chat Completion
A simpler intelligence type for straightforward chat completion without the context injection and caching features of Dynamic Prompt. Suitable for basic conversational agents that don’t need advanced tool orchestration.
Model Selection
Corral ships with GPT-4o as the default platform model, deployed via Azure AI Foundry into the customer’s tenant.
Managing Models
Navigate to Instance → Models in the admin console, where you can:
- Sync from Foundry — pull available model deployments from your AI Foundry catalog
- Enable/disable models — control which models are available for agent configuration
- Select models per app — each app can use a different model from the enabled set
Model Resolution
When an agent processes a message, the platform resolves the model through a three-tier lookup:
- Direct service lookup (DI-registered models)
- Platform model mapping (default model name → Foundry deployment)
- Foundry runtime resolution (queries the database, creates a client on demand, caches for reuse)
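The three tiers can be sketched as a fall-through lookup. This is a simplified illustration under assumed names (`di_registry`, `platform_map`, `foundry` are stand-ins, not the platform's types):

```python
def resolve_model(model_name, di_registry, platform_map, foundry):
    """Illustrative three-tier model resolution (all names are assumptions).

    1. Direct service lookup for DI-registered models.
    2. Platform mapping from a default model name to a Foundry deployment.
    3. Runtime resolution against Foundry, caching the client for reuse.
    """
    # Tier 1: a client already registered with the DI container wins.
    if model_name in di_registry:
        return di_registry[model_name]
    # Tier 2: map a platform default name to its Foundry deployment name.
    deployment = platform_map.get(model_name, model_name)
    # Tier 3: create a client for the deployment on demand and cache it.
    if deployment not in foundry.client_cache:
        foundry.client_cache[deployment] = foundry.create_client(deployment)
    return foundry.client_cache[deployment]
```

Because tier 3 caches the created client, repeated messages to the same agent reuse one client rather than re-querying the database.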
Model Agnosticism
The platform uses Microsoft.Extensions.AI as the abstraction layer. Any LLM provider that implements IChatClient can be integrated. The default path is Azure AI Foundry, but the architecture supports additional providers as they’re added.
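The value of the abstraction is that agent code depends only on the chat-client interface, never on a concrete provider. The real interface is .NET's `IChatClient` from Microsoft.Extensions.AI; the Python sketch below is just a language-neutral stand-in to show the shape, with a hypothetical Foundry-backed implementation:

```python
from abc import ABC, abstractmethod

class ChatClient(ABC):
    """Stand-in for the IChatClient abstraction (the real interface
    lives in Microsoft.Extensions.AI, in .NET)."""

    @abstractmethod
    def complete(self, messages: list) -> str: ...

class FoundryChatClient(ChatClient):
    """Hypothetical provider: any backend implementing the interface
    can be swapped in without changing agent code."""

    def __init__(self, deployment: str):
        self.deployment = deployment

    def complete(self, messages: list) -> str:
        # A real implementation would call the Azure AI Foundry deployment;
        # this stub only demonstrates the substitution point.
        return f"[{self.deployment}] reply to {len(messages)} message(s)"
```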
This section is a work in progress.
System Prompts
The system prompt defines what the agent is, how it behaves, and what constraints it operates under. It’s the most important configuration for shaping agent behavior.
Guidelines for Writing System Prompts
- Be specific about the agent’s role — “You are a marketing assistant for [company]. You help with campaign planning, content drafting, and performance analysis.” is better than “You are a helpful assistant.”
- Define boundaries — what should the agent do and not do? What topics should it defer on?
- Include context — domain-specific terminology, company information, workflow descriptions
- Set tone — formal, conversational, technical — whatever matches the use case
- Reference tools — if the agent has specific tools, explain when and how to use them
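Putting the guidelines together, a prompt for a hypothetical marketing agent (the company name and tools are placeholders) might read:

```
You are a marketing assistant for Contoso. You help with campaign
planning, content drafting, and performance analysis.

Boundaries: do not give legal or financial advice; defer pricing
questions to the sales team.

Tone: conversational but professional.

Tools: use the analytics tool for performance questions, and the
document tool when asked to draft or revise content.
```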
System Prompt Limits
The Dynamic Prompt intelligence type supports system prompts up to 50,000 characters. Context injectors (workspace context, datetime, tool results) are appended automatically and count toward the model’s context window, not this limit.
Test vs. Published
Intelligence configuration has two states:
- Test — your working draft. Changes are immediately reflected in the Build & Test debug chat.
- Published — the live configuration users interact with. Set when you publish a version.
Always test your intelligence configuration in the Build & Test tab before publishing.