# Your First 30 Minutes
You’ve deployed Corral from the Azure Marketplace and completed admin consent. Here’s what to do next.
## Step 1: Log In to the Admin Console
Navigate to the URL provided after deployment. Sign in with your organizational Entra ID account. You’ll land in the admin console as the first user with full Management permissions.
You’ll see your workspace dashboard with two auto-created assistants:
- Core Assistant — your primary AI assistant, configured with GPT-4o, persistent conversations, and access to the full tool suite (file system, code execution, and more)
- Sub-Assistant — a secondary assistant designed for delegated, domain-specific work
These are ready to use immediately.
## Step 2: Talk to Your Assistant
Click into the Core Assistant and open the Build & Test tab. This is a debug chat interface where you can interact with the assistant and see exactly what’s happening — which tools it calls, what model it uses, and how it reasons.
Try a few things:
- Ask a question — the assistant responds using GPT-4o (or whichever model your deployment configured)
- Ask it to write a file — it creates a file in the workspace file system you can inspect
- Ask it to run code — it executes in a sandboxed environment and returns results
This is the same assistant your users will interact with in the Hub, but with debug visibility.
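To exercise the code-execution tool, you can paste a short, self-contained snippet into the Build & Test chat and ask the assistant to run it. The snippet below is a generic Python example (nothing in it is Corral-specific); any small program with visible output works:

```python
# A small, self-contained snippet to verify sandboxed code execution.
# Ask the assistant: "Run this code and show me the output."

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Return the balance after compounding annually."""
    return principal * (1 + rate) ** years

balance = compound_interest(1000.0, 0.05, 10)
print(f"$1000 at 5% for 10 years -> ${balance:.2f}")
```

If the sandbox is working, the assistant should report the printed result (about $1628.89) rather than just explaining the code.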
## Step 3: Open the Hub
The Hub is the end-user interface. Navigate to it (the URL is separate from the admin console) and sign in with the same account.
You’ll see:
- A chat interface with the Core Assistant
- Conversation history in the sidebar
- The ability to create sub-assistants for domain-specific work
This is what your team members will see when they log in.
## Step 4: Explore the Admin Console
Back in the admin console, each assistant (app) has tabs for:
| Tab | What It Does |
|---|---|
| Build & Test | Debug chat — interact with the assistant and see tool calls and model responses |
| Intelligence | Configure the model, system prompt, and intelligence type |
| Connections | Add MCP servers and OpenAPI integrations |
| Channels | Enable Teams, DirectLine, and the embedded widget |
| Analytics | Conversation counts, token usage, tool call statistics, usage patterns |
| Versions | Publish snapshots and version history |
## Step 5: Configure a Model (Optional)
Your deployment ships with a default model (GPT-4o via Azure AI Foundry). To see available models or change the default:
- Go to Instance → Models in the admin console
- Sync from the AI Foundry catalog to see available deployments
- Enable or disable models as needed
- Configure which model each assistant uses in its Intelligence tab
## Step 6: Add Your First Integration (Optional)
If you want to connect the assistant to an external system:
**MCP Server:** Go to the assistant’s Connections tab → MCP. Add the server URL and optional authentication. Tools from the MCP server become available to the assistant automatically.

**OpenAPI:** Go to Connections → OpenAPI. Provide a spec URL. Corral imports the API definition and generates callable tools.
See Tools: MCP Connections → and Tools: OpenAPI Connections → for details.
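To get a feel for what an OpenAPI import works from, here is a minimal OpenAPI 3.0 fragment. The service, path, and schema are hypothetical examples (not part of Corral); in specs like this, each operation typically maps to one callable tool, named from its `operationId`:

```yaml
openapi: "3.0.3"
info:
  title: Ticket API          # hypothetical example service
  version: "1.0.0"
paths:
  /tickets/{id}:
    get:
      operationId: getTicket # operation IDs like this become tool names
      summary: Fetch a single ticket by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested ticket
```

A spec with clear `operationId` values and `summary` text gives the assistant better-named, better-described tools to call.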
## Step 7: Invite Your Team
This section is a work in progress.
## What’s Next
You have a working AI platform in your cloud. From here:
- Customize the Core Assistant: Change the system prompt, add tools, connect data sources. See Intelligence Configuration →.
- Create Sub-Assistants: Build domain-specific assistants for marketing, engineering, support, etc. See Assistants & Sub-Assistants →.
- Enable Teams: Let your team interact with assistants directly from Microsoft Teams. See Channels →.
- Embed on your site: Put a chat widget on your website or internal portal. See The Embedded Widget →.
- Understand the architecture: Learn how on-tenant deployment works and what it means for your security posture. See How On-Tenant Works →.