- Install and configure the instrumentation
- Track AI operations (generateText, streamText, etc.)
- Group calls into conversations
Installation
Compatible with the Vercel AI SDK: ai >= 3.0.0 < 7
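The instrumentation package name is not shown in this section, so the command below uses a placeholder; substitute your actual package name:

```shell
# <package-name> is a placeholder for the instrumentation package.
npm install <package-name> @opentelemetry/sdk-node
```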
Usage
The instrumentation has two parts: the VercelAiInstrumentation that hooks into the AI SDK to enable telemetry, and the VercelAiSpanProcessor that normalizes span attributes to OpenTelemetry GenAI Semantic Conventions.
instrumentation.ts
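A minimal setup sketch: the import path is a placeholder, and the idea that both classes take no required constructor arguments is an assumption based on the description above.

```typescript
// instrumentation.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
// Placeholder import path — use the actual package name.
import { VercelAiInstrumentation, VercelAiSpanProcessor } from "<package-name>";

const sdk = new NodeSDK({
  // Normalizes Vercel AI SDK span attributes to GenAI Semantic Conventions.
  spanProcessors: [new VercelAiSpanProcessor()],
  // Hooks into the AI SDK to enable its built-in telemetry.
  instrumentations: [new VercelAiInstrumentation()],
});

sdk.start();
```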
generateText, streamText, generateObject, streamObject, embed, embedMany, and rerank calls are automatically instrumented.
example.ts
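A sketch of an instrumented call using the AI SDK's standard experimental_telemetry option; the model and functionId values are illustrative:

```typescript
// example.ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: "What's the weather like today?",
  // functionId names the pipeline span, e.g. "invoke_agent weather-app".
  experimental_telemetry: {
    isEnabled: true,
    functionId: "weather-app",
  },
});
```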
Configuration
Telemetry is configured per call through the AI SDK's experimental_telemetry option. Per-call settings take priority over the global config.
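For example, a call can opt out of recording prompt and completion content while keeping telemetry enabled; recordInputs, recordOutputs, and metadata are standard AI SDK telemetry settings:

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Summarize this document.",
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false,  // omit prompt content from span attributes
    recordOutputs: false, // omit generated text from span attributes
    metadata: { userTier: "pro" }, // attached as custom span attributes
  },
});
```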
Conversation tracking
Use withConversationId() to group multiple AI calls into a single conversation. This sets gen_ai.conversation.id on all spans created within the callback, enabling the Conversations tab in the Monocle AI dashboard.
chat.ts
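A sketch of conversation grouping; the (id, callback) signature of withConversationId and its import path are assumptions based on the description above:

```typescript
// chat.ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
// Placeholder import path — use the actual package name.
import { withConversationId } from "<package-name>";

// Both calls below get gen_ai.conversation.id = "conv-123" on their spans.
await withConversationId("conv-123", async () => {
  await generateText({
    model: openai("gpt-4o"),
    prompt: "Hi!",
    experimental_telemetry: { isEnabled: true },
  });
  await generateText({
    model: openai("gpt-4o"),
    prompt: "Tell me more.",
    experimental_telemetry: { isEnabled: true },
  });
});
```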
Spans
The instrumentation does not create spans directly. It enables the Vercel AI SDK's built-in telemetry and processes those spans through the VercelAiSpanProcessor to normalize attributes.
Pipeline spans
A span is created for each top-level AI call (generateText, streamText, etc.). The span name is normalized to {operation} {functionId} (e.g., invoke_agent weather-app).
Inner LLM call spans
A child span is created for the actual LLM API call. The span name is {operation} {modelId} (e.g., generate_text gpt-4o).
Tool call spans
Each tool invocation creates a span named execute_tool {toolName}. The instrumentation also detects Vercel AI SDK v5 tool errors embedded in result content and records them as exceptions on the corresponding span.
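For example, defining a tool with the AI SDK's tool() helper (v4-style parameters shown here; the tool name and schema are illustrative) yields one execute_tool span per invocation:

```typescript
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

await generateText({
  model: openai("gpt-4o"),
  prompt: "What's the weather in Paris?",
  tools: {
    // Each invocation produces a span named "execute_tool getWeather".
    getWeather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 21 }),
    }),
  },
  experimental_telemetry: { isEnabled: true },
});
```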
Vercel AI-specific attributes
In addition to the standard AI attributes, the Vercel AI instrumentation emits these extra attributes.
Input/output attributes (when recording is enabled)
| Attribute | Description |
|---|---|
| gen_ai.system_instructions | System prompt text |
| gen_ai.input.messages | Input messages (JSON) |
| gen_ai.response.text | Generated text |
| gen_ai.response.tool_calls | Tool calls made |
| gen_ai.response.object | Generated object |
| gen_ai.tool.input | Tool arguments |
| gen_ai.tool.output | Tool result |
Provider-specific token breakdowns
The processor extracts detailed token metrics from provider metadata when available:

| Attribute | Providers |
|---|---|
| gen_ai.usage.input_tokens.cached | OpenAI, Anthropic, Bedrock, DeepSeek |
| gen_ai.usage.input_tokens.cache_write | Anthropic, Bedrock |
| gen_ai.usage.input_tokens.cache_miss | DeepSeek |
| gen_ai.usage.output_tokens.reasoning | OpenAI |
Vercel AI SDK native attributes
Remaining Vercel-native attributes are preserved under the vercel.ai.* namespace (e.g., vercel.ai.response.msToFirstChunk, vercel.ai.response.avgOutputTokensPerSecond).