1. Deploy Raven

Follow the Self-Hosting guide to deploy Raven on your infrastructure with Docker Compose. Once running, open http://localhost:3000 and create your account.

2. Add a Provider

Navigate to Providers in the dashboard and add your first LLM provider.
  1. Select a provider: Choose from supported providers like OpenAI and Anthropic.
  2. Enter your API key: Paste your provider's API key. Raven encrypts and stores it securely.
  3. Enable the provider: Toggle the provider to enabled. You can disable it at any time without deleting it.

3. Create a Virtual Key

Go to Keys and create a virtual key. This is the API key your application will use.
Example virtual key: rk_live_abc123def456...
Virtual keys support:
  • Rate limits — Set requests per minute (RPM) and requests per day (RPD)
  • Environments — Separate live and test keys
  • Expiration — Set an optional expiration date
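Rate limits are enforced by the gateway itself; a request over the limit is rejected rather than forwarded. To illustrate what an RPM limit means in practice, here is a minimal sliding-window sketch in Python. This is illustrative only and not part of Raven's API:

```python
import time
from collections import deque


class RpmLimiter:
    """Sliding-window requests-per-minute check, mirroring the kind of
    per-key RPM limit a gateway enforces (illustrative sketch only)."""

    def __init__(self, rpm: int):
        self.rpm = rpm
        self.stamps: deque = deque()  # timestamps of accepted requests

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the 60-second window.
        while self.stamps and now - self.stamps[0] >= 60:
            self.stamps.popleft()
        if len(self.stamps) >= self.rpm:
            return False  # over the limit; the gateway would reject here
        self.stamps.append(now)
        return True
```

With `rpm=2`, two requests in quick succession are allowed, a third inside the same minute is rejected, and capacity frees up again once the window slides past the earliest request.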

4. Make Your First Request

Use the OpenAI-compatible endpoint with your virtual key:
curl -X POST http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer rk_live_abc123def456" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
Raven is fully compatible with the OpenAI SDK. Just change the baseURL and apiKey — no other code changes needed.
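The curl request above can also be issued from Python. A minimal stdlib-only sketch, assuming the local gateway address and the placeholder key from the steps above (substitute your own values):

```python
import json
import urllib.request

# Placeholder values from the steps above -- substitute your own.
BASE_URL = "http://localhost:4000/v1"
VIRTUAL_KEY = "rk_live_abc123def456"


def build_chat_request(prompt: str, model: str = "gpt-4o"):
    """Assemble the URL, headers, and JSON body for a chat completion."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {VIRTUAL_KEY}",
        "Content-Type": "application/json",
    }
    return f"{BASE_URL}/chat/completions", headers, body


def chat(prompt: str, model: str = "gpt-4o") -> dict:
    """POST the request through the gateway's OpenAI-compatible endpoint."""
    url, headers, body = build_chat_request(prompt, model)
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The response has the standard OpenAI chat-completions shape, so any existing parsing code keeps working unchanged.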

5. Monitor in the Dashboard

Head to Analytics to see your requests in real time, including token usage, cost, latency, and the provider and model used.

Next Steps

Self-Hosting

Deploy Raven on your own infrastructure with Docker.

Core Concepts

Learn the key concepts behind Raven’s architecture.

Add Guardrails

Set up content filters and safety rules.

Routing Rules

Route requests to optimize for cost or latency.