Examples

Use the example patterns in this section to choose the fastest way to evaluate Sentinel or to integrate it into an application. Public examples demonstrate hosted product usage, not internal implementation details.

Which example should you use?

Evaluate Sentinel with a working UI

Use a simple UI example when you want to validate multiple SDK surfaces quickly through one interface.

Build a backend integration

Use a lightweight backend example when you want the simplest server-to-server pattern with very little application framework overhead.

Test the raw gateway

Use curl when you need to validate keys, lanes, headers, and route behavior without any SDK abstraction.

Next.js-style server-side SDK example

Use a server-side Next.js example when you want a hosted SDK playground for:

  • OpenAI SDK via the OpenAI-compatible lane
  • Anthropic SDK via the native Anthropic lane
  • Google GenAI SDK via the native Google lane

The current example surface includes:

  • OpenAI chat, responses, embeddings, images, moderations, speech, and transcriptions
  • Anthropic messages, models, batches, and files
  • Google generate, models, embeddings, and batches

What it demonstrates:

  • how official SDKs authenticate to Sentinel
  • how the lanes differ by provider
  • how to validate broad SDK compatibility from one app
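As a sketch of the SDK-to-lane mapping, the three official clients can each be pointed at a different gateway base URL. The base-URL paths below are inferred from the hosted curl examples further down; the exact auth header each SDK sends (for example, `x-api-key` versus `Authorization: Bearer`) and how the gateway maps it are assumptions to verify against your deployment.

```typescript
// Sketch: pointing each official SDK at its Sentinel gateway lane.
// Base-URL paths are inferred from the hosted curl examples; confirm
// them, and the gateway's accepted auth headers, before relying on this.
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import { GoogleGenAI } from "@google/genai";

const apiKey = process.env.SP_API_KEY!;

// OpenAI-compatible lane: standard OpenAI client, overridden base URL.
const openai = new OpenAI({
  apiKey,
  baseURL: "https://gateway.caldorus.com/v1",
});

// Native Anthropic lane: the SDK sends x-api-key by default; if the
// gateway expects a Bearer token (as the curl examples do), pass
// authToken instead of apiKey.
const anthropic = new Anthropic({
  authToken: apiKey,
  baseURL: "https://gateway.caldorus.com/v1/anthropic",
});

// Native Google lane: the GenAI SDK takes its base URL via httpOptions.
const google = new GoogleGenAI({
  apiKey,
  httpOptions: { baseUrl: "https://gateway.caldorus.com/v1/google" },
});
```

Because all three clients differ only in their base URL and auth option, one app can exercise every lane, which is what makes this layout useful for broad compatibility checks.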

What it does not prove:

  • production deployment posture
  • every file, live, or advanced provider workflow
  • your own application’s auth or tenancy model

Express-style backend example

Use an Express-style example when you want a minimal server-to-server integration shape for curl-based verification and backend SDK usage.

What it demonstrates:

  • straightforward backend integration patterns
  • server-to-server request handling
  • small-surface examples for quick validation
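A minimal server-to-server route can be sketched as follows. The `/chat` route name and request shape are illustrative; only `SP_API_KEY`, the gateway URL, and the request body come from the curl examples below.

```typescript
// Sketch of a minimal Express server-to-server integration.
// Route path and request shape are illustrative, not prescribed.
import express from "express";

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  // Forward the caller's prompt to the hosted OpenAI-compatible lane.
  // The API key stays on the server and never reaches a browser client.
  const upstream = await fetch(
    "https://gateway.caldorus.com/v1/chat/completions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.SP_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4.1-mini",
        messages: [{ role: "user", content: req.body.prompt }],
      }),
    },
  );
  // Pass the gateway response through unchanged for easy curl verification.
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```

Keeping the route a thin pass-through makes it easy to compare its output against the raw curl examples below before layering on application logic.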

What it does not prove:

  • browser-facing UI behavior
  • advanced control-plane setup
  • production security posture by itself

Direct curl examples

Hosted OpenAI-compatible:

curl -X POST https://gateway.caldorus.com/v1/chat/completions \
  -H "Authorization: Bearer ${SP_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"model":"gpt-4.1-mini","messages":[{"role":"user","content":"Hello"}]}'

Hosted Anthropic native:

curl -X POST https://gateway.caldorus.com/v1/anthropic/v1/messages \
  -H "Authorization: Bearer ${SP_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"model":"claude-sonnet-4-20250514","max_tokens":256,"messages":[{"role":"user","content":"Hello"}]}'

Hosted Google native:

curl -X POST https://gateway.caldorus.com/v1/google/v1beta/models/gemini-2.5-flash:generateContent \
  -H "Authorization: Bearer ${SP_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}'

Production-safe guidance

  • do not copy development keys into application clients
  • treat file and live workflows as separate compatibility checks
  • prefer example code that matches the exact SDK or lane you will deploy
  • validate the raw gateway path before blaming the SDK