The Vercel AI SDK works with Raven out of the box using the @ai-sdk/openai-compatible provider. This gives you access to generateText, streamText, useChat, and all other AI SDK features through your self-hosted Raven gateway.
Installation
npm install ai @ai-sdk/openai-compatible
Setup
Create a Raven provider using createOpenAICompatible:
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
const raven = createOpenAICompatible({
name: "raven",
apiKey: "rk_live_abc123...",
baseURL: "http://localhost:4000/v1",
});
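The name is an arbitrary label that identifies the provider in AI SDK metadata, apiKey is a Raven virtual key, and baseURL points at your gateway's OpenAI-compatible endpoint.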
Generate Text
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";
const raven = createOpenAICompatible({
name: "raven",
apiKey: "rk_live_abc123...",
baseURL: "http://localhost:4000/v1",
});
const { text } = await generateText({
model: raven("gpt-4o"),
prompt: "Explain quantum computing in one paragraph.",
});
console.log(text);
Use any model ID your Raven gateway is configured to serve; Raven routes each request to the matching upstream provider:
// OpenAI
const { text } = await generateText({
model: raven("gpt-4o"),
prompt: "Hello!",
});
// Anthropic
const { text } = await generateText({
model: raven("claude-sonnet-4-20250514"),
prompt: "Hello!",
});
Stream Text
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { streamText } from "ai";
const raven = createOpenAICompatible({
name: "raven",
apiKey: "rk_live_abc123...",
baseURL: "http://localhost:4000/v1",
});
const result = streamText({
model: raven("gpt-4o"),
prompt: "Write a short story about a robot.",
});
for await (const text of result.textStream) {
process.stdout.write(text);
}
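Besides textStream, the result exposes promises that resolve once streaming finishes. For example, you can await the final text and token usage after the loop above:
// These resolve once the stream has finished
const fullText = await result.text;
const usage = await result.usage;
console.log(`\nTotal tokens: ${usage.totalTokens}`);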
Next.js API Route
Create an API route that streams responses from Raven:
// app/api/chat/route.ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { convertToModelMessages, streamText, type UIMessage } from "ai";
const raven = createOpenAICompatible({
  name: "raven",
  apiKey: process.env.RAVEN_API_KEY!,
  baseURL: "http://localhost:4000/v1",
});
export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
  const result = streamText({
    model: raven("gpt-4o"),
    // useChat sends UI messages; convert them to model messages
    messages: convertToModelMessages(messages),
  });
  return result.toUIMessageStreamResponse();
}
Chat UI with useChat
Pair the API route with the useChat hook for a full chat interface. By default, useChat posts to /api/chat, so it finds the route above without extra configuration:
// app/page.tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { useState } from "react";
export default function Chat() {
const [input, setInput] = useState("");
const { messages, sendMessage } = useChat();
return (
<div>
{messages.map((message) => (
<div key={message.id}>
<strong>{message.role === "user" ? "You" : "AI"}:</strong>{" "}
{message.parts.map((part, i) =>
  part.type === "text" ? <span key={i}>{part.text}</span> : null
)}
</div>
))}
<form
onSubmit={(e) => {
e.preventDefault();
sendMessage({ text: input });
setInput("");
}}
>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Send a message..."
/>
<button type="submit">Send</button>
</form>
</div>
);
}
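useChat also returns a status field ("ready", "submitted", "streaming", or "error"). A common touch is disabling the submit button while a response is in flight, e.g. disabled={status !== "ready"} on the button above.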
Function Calling
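Tool calls work through the gateway the same way they do against the upstream provider. Tool input schemas are defined with zod, so install it alongside the SDK (npm install zod):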
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText, tool } from "ai";
import { z } from "zod";
const raven = createOpenAICompatible({
name: "raven",
apiKey: "rk_live_abc123...",
baseURL: "http://localhost:4000/v1",
});
const { text, toolResults } = await generateText({
  model: raven("gpt-4o"),
  prompt: "What is the weather in Tokyo?",
  tools: {
    getWeather: tool({
      description: "Get current weather for a city",
      inputSchema: z.object({
        city: z.string().describe("City name"),
      }),
      execute: async ({ city }) => {
        // Mock data; replace with a real weather lookup
        return { temperature: 22, condition: "sunny" };
      },
    }),
  },
});
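With the default single-step behavior, generation stops after the tool executes, so text may be empty and the tool output lands in toolResults. To let the model write a final answer that uses the tool result, allow an extra step with stopWhen; here is a sketch using the SDK's stepCountIs helper:
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText, stepCountIs, tool } from "ai";
import { z } from "zod";
const raven = createOpenAICompatible({
  name: "raven",
  apiKey: process.env.RAVEN_API_KEY!,
  baseURL: "http://localhost:4000/v1",
});
const { text } = await generateText({
  model: raven("gpt-4o"),
  prompt: "What is the weather in Tokyo?",
  tools: {
    getWeather: tool({
      description: "Get current weather for a city",
      inputSchema: z.object({ city: z.string().describe("City name") }),
      // Mock data; replace with a real weather lookup
      execute: async ({ city }) => ({ temperature: 22, condition: "sunny" }),
    }),
  },
  // Allow a second step so the model can turn the tool result into prose
  stopWhen: stepCountIs(2),
});
console.log(text);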
Environment Variables
Store your Raven virtual key in your environment:
RAVEN_API_KEY=rk_live_abc123...
Then reference it in your provider setup:
const raven = createOpenAICompatible({
name: "raven",
apiKey: process.env.RAVEN_API_KEY!,
baseURL: "http://localhost:4000/v1",
});
Replace http://localhost:4000 with your Raven gateway URL if you deployed it to a different host or port.
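You can also read the base URL from the environment; RAVEN_BASE_URL below is an illustrative variable name, not one the SDK reads automatically:
const raven = createOpenAICompatible({
  name: "raven",
  apiKey: process.env.RAVEN_API_KEY!,
  baseURL: process.env.RAVEN_BASE_URL ?? "http://localhost:4000/v1",
});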