Quickstart
Use this guide to go from zero to a successful request through Sentinel's hosted gateway.
By the end, you will have:
- one configured provider
- one simple route target
- one Sentinel key
- one successful request visible in Sentinel
Prerequisites
Before you begin, make sure you have:
- access to the Sentinel console at https://sentinel.caldorus.com
- at least one provider credential you can configure in Sentinel
- a terminal or API client for sending the first request
This quickstart follows the hosted Sentinel path. It assumes Caldorus hosts the gateway and console, and that you will configure providers, routing, and keys through the Sentinel console.
1. Sign in to the Sentinel console
Open https://sentinel.caldorus.com and sign in to your Sentinel workspace.
You should be able to access the product areas for Providers, Routing, Keys, and Requests.
2. Configure one provider
Start with a single provider and one clear route target. For the first request, the simplest path is usually an OpenAI-compatible provider lane because it gives the broadest client compatibility with the least setup friction.
In the Sentinel console:
- Open Providers
- Create an active provider configuration
- Store the provider credential using Sentinel-managed secret handling
- Confirm the provider is ready for routing
You should now have one working provider available in your workspace.
3. Confirm a simple route path
Open Routing in the Sentinel console and verify that the model you plan to call resolves to the provider you just configured.
For the first request, keep routing simple:
- one provider
- one known-good model
- no fallback complexity
You should now have one clear route target for an OpenAI-compatible request, such as gpt-4.1-mini.
4. Create or retrieve a Sentinel key
Open Keys in the Sentinel console and create a key for the project and environment you want to test.
Save the generated Sentinel key. It should look like:
sp_live_...
You should now have one valid Sentinel key that can call the hosted gateway.
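Because Sentinel keys and provider keys are easy to mix up, a quick local check on the key prefix catches that mistake before any request is sent. This is only a sketch: the sp_live_ prefix is taken from the key format shown above, and the value below is a placeholder, not a real key.

```shell
# Placeholder value only - substitute the key generated in the console.
export SP_API_KEY="sp_live_your_generated_key"

# Fail fast if a provider key (or anything else) was pasted by mistake.
# The sp_live_ prefix is assumed from the key format shown in this guide.
case "$SP_API_KEY" in
  sp_live_*) echo "key format looks right" ;;
  *)         echo "unexpected key format" >&2 ;;
esac
```

If this prints a warning, go back to Keys in the console and copy the Sentinel key rather than a provider credential.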
5. Send the first request
Use the Sentinel key against the hosted gateway:
export SP_API_KEY=sp_live_your_generated_key
curl -i -X POST https://gateway.caldorus.com/v1/chat/completions \
  -H "Authorization: Bearer $SP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-mini",
    "messages": [{"role":"user","content":"Hello from Sentinel"}]
  }'
A successful response should return model output and include an x-sp-request-id header.
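To correlate a call with the Requests view later, it helps to save the response headers to a file and pull out the request id. A minimal sketch, assuming the x-sp-request-id header name shown above; the sp_request_id helper and the sample header value are illustrative, not real gateway output.

```shell
# Save response headers when sending the request, e.g.:
#   curl -s -D headers.txt -o body.json https://gateway.caldorus.com/v1/chat/completions ...
# then extract the id for correlation in the Requests view.
sp_request_id() {
  # Header names are case-insensitive; strip the name, spacing, and any CR.
  grep -i '^x-sp-request-id:' "$1" | head -n 1 \
    | sed -e 's/^[^:]*:[[:space:]]*//' -e 's/\r$//'
}

# Illustrative run against placeholder headers (not real gateway output):
printf 'HTTP/1.1 200 OK\r\nx-sp-request-id: req_placeholder_123\r\n' > headers.txt
sp_request_id headers.txt   # prints req_placeholder_123
```

Keeping the id in a variable (`id=$(sp_request_id headers.txt)`) makes it easy to paste into the Requests view search in step 6.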
You can also confirm model visibility through the same Sentinel key:
curl -H "Authorization: Bearer $SP_API_KEY" \
  https://gateway.caldorus.com/v1/models
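If the models response is long, a dependency-free filter can reduce it to just the model ids so you can confirm your route target is listed. A sketch assuming the OpenAI-compatible response shape (`{"data":[{"id":...}]}`); the sp_model_ids helper and the sample body are illustrative.

```shell
# Save the body first:
#   curl -s -H "Authorization: Bearer $SP_API_KEY" \
#     https://gateway.caldorus.com/v1/models > models.json
sp_model_ids() {
  # Pull every "id": "..." value; crude, but avoids a jq dependency.
  grep -o '"id"[[:space:]]*:[[:space:]]*"[^"]*"' "$1" \
    | sed 's/.*"\([^"]*\)"$/\1/'
}

# Illustrative run against a placeholder body (not real gateway output):
printf '{"data":[{"id":"gpt-4.1-mini"},{"id":"gpt-4.1"}]}' > models.json
sp_model_ids models.json   # prints one model id per line
```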
For provider-native behavior, validate the same pattern on the correct lane for that provider.
6. Confirm the request in Sentinel
A successful response is only the first check. Confirm that Sentinel is applying the platform behavior you expect.
Verify that:
- the request authenticated with the correct key
- the request routed to the intended provider and model
- the response includes an x-sp-request-id header
- the request appears in the Requests view in Sentinel
- policy and routing signals are visible for operator review
At this point, you should be able to correlate the request in Sentinel, not just in the provider response.
7. Use an example client
After the first successful request, continue with:
- Examples for Next.js, Express, and curl integration patterns
- SDKs for OpenAI, Anthropic, and Google client behavior
Common failure points
Authentication fails at the gateway
- confirm that you are sending a valid Sentinel key, not a provider key
Authentication succeeds but the provider call fails
- verify provider configuration, stored credential, and route target in the console
Model is not found
- confirm the exact model name and the route path configured for the project or environment
Auth looks valid but still fails
- confirm that the key belongs to the intended project and environment
Native SDK calls succeed but curl fails
- check header shape and provider-specific request behavior
A model or generation call times out
- check provider latency and input size
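The failure buckets above can often be narrowed with a single probe: the HTTP status from the gateway separates key problems from routing and upstream problems. This sketch uses common HTTP semantics; the sp_triage helper and the exact status-to-cause mapping are assumptions for illustration, not documented Sentinel behavior.

```shell
# Get the status with:
#   status=$(curl -s -o /dev/null -w '%{http_code}' \
#     -H "Authorization: Bearer $SP_API_KEY" https://gateway.caldorus.com/v1/models)
sp_triage() {
  case "$1" in
    200)     echo "auth and routing look healthy" ;;
    401|403) echo "gateway rejected the key - confirm it is a Sentinel key" ;;
    404)     echo "model or path not found - check the route target" ;;
    5??)     echo "gateway reached, but the upstream call likely failed" ;;
    *)       echo "unexpected status: $1" ;;
  esac
}

sp_triage "${status:-401}"
```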
For deeper diagnosis, see Troubleshooting.
What comes next
After the first successful request, continue with the Examples and SDKs guides described above.