The Playground is a built-in chat interface in the Raven dashboard that lets you test models, iterate on prompts, and compare provider responses — all without writing any code.

Using the Playground

1. Navigate to Chat: go to Chat in the dashboard sidebar.
2. Select a Provider and Model: choose from any of your configured providers and their available models.
3. Start a Conversation: type a message and send it. The response streams in real time.
4. Iterate: continue the conversation, adjust the model, or start a new session.

Features

Provider and Model Selection

Switch between any configured provider and model from a dropdown menu. This makes it easy to compare how different models respond to the same prompt. For an OpenAI provider, for example, the list includes GPT-4o, GPT-4o-mini, GPT-4, and other OpenAI models.

Streaming Responses

Responses are streamed in real time, token by token. You see the response as it is generated, just like in production.
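Token-by-token streaming can be pictured as consuming an iterator instead of waiting for the full response string. This is a generic illustration of the concept, not Raven's client API (the playground needs no code at all):

```python
def stream_tokens(text):
    """Yield a response one token (here, one word) at a time,
    the way a streaming endpoint delivers partial chunks."""
    for token in text.split():
        yield token

# Render each token as it arrives instead of blocking on the full reply.
chunks = []
for token in stream_tokens("The response streams in real time"):
    chunks.append(token)

assert " ".join(chunks) == "The response streams in real time"
```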

Session Tracking

Each conversation is saved as a session with:
Field         Description
Title         Auto-generated or custom title
Model         The model used for the conversation
Messages      Full message history with roles (system, user, assistant)
Token usage   Input and output token counts per message
Cost          Calculated cost for each message
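The session fields above can be sketched as a simple data structure. This is a hypothetical illustration only; the class and field names below are assumptions, not Raven's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str               # "system", "user", or "assistant"
    content: str
    input_tokens: int = 0   # tokens sent in the prompt
    output_tokens: int = 0  # tokens generated in the response
    cost: float = 0.0       # calculated cost for this message

@dataclass
class Session:
    title: str              # auto-generated or custom
    model: str              # the model used for the conversation
    messages: list[Message] = field(default_factory=list)

    @property
    def total_cost(self) -> float:
        # Session cost is just the sum of per-message costs.
        return sum(m.cost for m in self.messages)

session = Session(title="Prompt experiment", model="gpt-4o-mini")
session.messages.append(Message(role="user", content="Hello", input_tokens=5))
session.messages.append(
    Message(role="assistant", content="Hi!", output_tokens=3, cost=0.0000018)
)
```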

Conversation History

All conversations are automatically saved. Return to any previous session to:
  • Review past responses
  • Continue the conversation
  • Replay with a different model

Token and Cost Tracking

Every message in the playground shows:
  • Input tokens — Tokens sent in the prompt (including conversation history)
  • Output tokens — Tokens generated in the response
  • Cost — Calculated cost based on the model’s pricing
This helps you understand the real cost of your prompts before deploying them to production.
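As a rough illustration of that arithmetic, a per-message cost is the token counts multiplied by the model's per-token rates. The rates below are placeholders, not actual provider pricing:

```python
def message_cost(input_tokens, output_tokens,
                 input_price_per_1m, output_price_per_1m):
    """Cost = input tokens x input rate + output tokens x output rate,
    where rates are quoted per million tokens."""
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000

# Example: 1,200 prompt tokens (including conversation history) and
# 350 completion tokens at placeholder rates of $0.15 / $0.60 per 1M tokens.
cost = message_cost(1_200, 350, 0.15, 0.60)
print(f"${cost:.6f}")  # → $0.000390
```

Note how conversation history inflates input tokens on every turn, which is why long playground sessions get progressively more expensive per message.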

Use Cases

Prompt Testing

Iterate on prompt templates before deploying them to production. Test edge cases and refine system messages.

Model Comparison

Compare how different models respond to the same prompt. Evaluate quality, style, and accuracy side by side.

Debugging

Reproduce specific request patterns to debug issues. Test guardrails and policy behavior interactively.

Demos

Showcase AI capabilities to stakeholders without building a custom interface.

How It Works

The playground uses the same proxy pipeline as production API requests. This means:
  • Guardrails are enforced on playground requests
  • Budgets are checked and deducted
  • Analytics are recorded for playground usage
  • Caching applies to playground requests
Playground requests are real requests that go through the full gateway pipeline. They count against your usage limits and budgets, and they appear in your analytics.
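The shared pipeline described above could be sketched as ordered stages. The stage names and interfaces here are assumptions for illustration; Raven's internals are not documented on this page:

```python
def handle_request(request, guardrails, budget, cache, provider, analytics):
    """Sketch of the gateway pipeline: playground and API requests
    take the same path, so every stage applies to both."""
    guardrails(request)                    # guardrails are enforced
    budget.check(request)                  # budget is checked up front
    response = cache.get(request)          # caching applies
    if response is None:
        response = provider(request)       # forwarded on a cache miss
        cache.put(request, response)
    budget.deduct(response)                # usage is deducted
    analytics.append((request, response))  # usage is recorded
    return response

class SimpleBudget:
    def __init__(self, limit):
        self.remaining = limit
    def check(self, request):
        if self.remaining <= 0:
            raise RuntimeError("budget exceeded")
    def deduct(self, response):
        self.remaining -= 1

class SimpleCache:
    def __init__(self):
        self._store = {}
    def get(self, request):
        return self._store.get(request)
    def put(self, request, response):
        self._store[request] = response

analytics = []
budget = SimpleBudget(limit=10)
cache = SimpleCache()
reply = handle_request(
    "Hello",
    guardrails=lambda r: None,         # no-op guardrail for the sketch
    budget=budget,
    cache=cache,
    provider=lambda r: f"echo: {r}",
    analytics=analytics,
)
```

A second identical request would hit the cache and skip the provider call, but it still passes guardrails and still shows up in analytics, matching the behavior described above.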

Tips

  • Use system messages to set context before starting a conversation
  • Compare responses by opening multiple sessions with different models
  • Use the playground to test guardrails by sending content that should trigger them
  • Check token counts to estimate production costs before deploying a prompt