v0 API
The `v0-1.0-md` model is designed for building modern web applications. It supports text and image inputs, provides fast streaming responses, and is compatible with the OpenAI Chat Completions API format.
- Framework-aware completions: Evaluated on modern stacks such as Next.js and Vercel.
- Auto-fix: Identifies and corrects common coding issues during generation.
- Quick edit: Streams inline edits as they're available.
- OpenAI-compatible: Works with any tool or SDK that supports OpenAI's API format.
- Multimodal: Supports both text and image inputs (base64-encoded image data).
You can experiment with the `v0-1.0-md` model in the AI Playground to test prompts and view responses.
The v0 API is currently in beta and requires a Premium or Team plan with usage-based billing enabled. For details, visit the pricing page.
To start using the `v0-1.0-md` model, create an API key on v0.dev.
You can then integrate it using the AI SDK, a TypeScript library designed for working with v0 and other OpenAI-compatible models.
```bash
npm install ai @ai-sdk/vercel
```
```typescript
import { generateText } from 'ai';
import { vercel } from '@ai-sdk/vercel';

const { text } = await generateText({
  model: vercel('v0-1.0-md'),
  prompt: 'Create a Next.js AI chatbot with authentication',
});
```
The `v0-1.0-md` model is the default model served by the v0 API.
Capabilities:
- Supports text and image inputs (multimodal)
- Compatible with OpenAI’s Chat Completions format
- Supports function/tool calls
- Streaming responses with low latency
- Optimized for frontend and full-stack web development
POST https://api.v0.dev/v1/chat/completions
This endpoint generates a model response based on a list of messages.
| Header | Required | Description |
|---|---|---|
| Authorization | Yes | Bearer token: `Bearer $V0_API_KEY` |
| Content-Type | Yes | Must be `application/json` |
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name. Use `"v0-1.0-md"`. |
| messages | array | Yes | List of message objects forming the conversation. |
| stream | boolean | No | If true, the response is returned as a stream of data chunks. |
| tools | array | No | Optional tool definitions (e.g., functions or API calls). |
| tool_choice | string or object | No | Specifies which tool to call, if tools are provided. |
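As a sketch of how `tools` and `tool_choice` fit into a request body, following the OpenAI Chat Completions convention (the `get_weather` function schema here is a hypothetical example, not a built-in v0 tool):

```typescript
// Illustrative request body with one tool definition.
// `get_weather` is a made-up function for demonstration only.
const body = {
  model: 'v0-1.0-md',
  messages: [
    { role: 'user', content: 'What should I wear in Berlin today?' },
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    },
  ],
  // Force a call to this specific tool; pass "auto" to let the model decide.
  tool_choice: { type: 'function', function: { name: 'get_weather' } },
};

console.log(JSON.stringify(body.tool_choice));
```

The serialized `body` is what you would send as the `-d` payload in the curl examples below.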
Each message object must contain:
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | One of `"user"`, `"assistant"`, or `"system"`. |
| content | string or array | Yes | The message content. Can be a string or an array of text/image blocks. |
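Because `content` can be an array of blocks, a multimodal message can be built like this. The block shapes follow the OpenAI Chat Completions convention for image input, and the base64 string is a stand-in rather than a real image:

```typescript
// A user message mixing a text block and a base64-encoded image block.
// `imageBase64` is placeholder data; in practice, read and encode a real file.
const imageBase64 = 'iVBORw0KGgoAAAANSUhEUg==';
const message = {
  role: 'user',
  content: [
    { type: 'text', text: 'Recreate this landing page in Next.js' },
    {
      type: 'image_url',
      image_url: { url: `data:image/png;base64,${imageBase64}` },
    },
  ],
};

console.log(message.content.length); // two blocks: text + image
```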
```bash
curl https://api.v0.dev/v1/chat/completions \
  -H "Authorization: Bearer $V0_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "v0-1.0-md",
    "messages": [
      { "role": "user", "content": "Create a Next.js AI chatbot" }
    ]
  }'
```
```bash
curl https://api.v0.dev/v1/chat/completions \
  -H "Authorization: Bearer $V0_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "v0-1.0-md",
    "stream": true,
    "messages": [
      { "role": "user", "content": "Add login to my Next.js app" }
    ]
  }'
```
If `stream` is `false` (the default), the response is a single JSON object:
```json
{
  "id": "v0-123",
  "model": "v0-1.0-md",
  "object": "chat.completion",
  "created": 1715620000,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here's how to add login to your Next.js app..."
      },
      "finish_reason": "stop"
    }
  ]
}
```
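Given a response shaped like the object above, the generated text sits in the first choice. A minimal extraction sketch (using a truncated sample in the same shape):

```typescript
// Sample non-streaming response in the documented shape (content truncated).
const response = {
  id: 'v0-123',
  model: 'v0-1.0-md',
  object: 'chat.completion',
  created: 1715620000,
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: "Here's how to add login..." },
      finish_reason: 'stop',
    },
  ],
};

// The assistant's text lives at choices[0].message.content.
const text = response.choices[0].message.content;
console.log(text);
```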
If `stream` is `true`, the server returns a series of data chunks formatted as Server-Sent Events (SSE). Each line begins with `data: ` followed by a partial delta:
```json
{
  "id": "v0-123",
  "model": "v0-1.0-md",
  "object": "chat.completion.chunk",
  "choices": [
    {
      "delta": {
        "role": "assistant",
        "content": "Here's how"
      },
      "index": 0,
      "finish_reason": null
    }
  ]
}
```
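A minimal sketch of reassembling the streamed text from raw SSE lines, using chunks in the shape shown above. A real client would read these lines from the HTTP response body; the terminating `data: [DONE]` sentinel is an assumption carried over from the OpenAI streaming convention:

```typescript
// Accumulate `delta.content` across SSE lines of the form "data: {...}".
function collectText(sseLines: string[]): string {
  let text = '';
  for (const line of sseLines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break; // end-of-stream sentinel (OpenAI convention)
    const chunk = JSON.parse(payload);
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Example: two chunks followed by the sentinel.
const lines = [
  'data: {"choices":[{"delta":{"role":"assistant","content":"Here\'s how"},"index":0,"finish_reason":null}]}',
  'data: {"choices":[{"delta":{"content":" to add login"},"index":0,"finish_reason":null}]}',
  'data: [DONE]',
];
console.log(collectText(lines)); // "Here's how to add login"
```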
| Limit | Value |
|---|---|
| Max messages per day | 200 |
| Max context window size | 128,000 tokens |
| Max output context size | 32,000 tokens |
To request a higher limit, contact us at [email protected].
By using our API, you agree to our API Terms.