Observability
Why this page exists
A docs site looks simple from the outside, but in production it has a lot of moving parts:
- The AI assistant streams answers from a model and calls tools — calls fail, latencies spike, costs grow.
- The MCP server is hit by external agents (Claude, Cursor, ChatGPT…) and you don't see what they ask for, or whether they get a useful answer.
- Pages may 404 because of a broken link in another doc.
- The sitemap can quietly miss collections after a content refactor.
Out of the box, Docus captures all of this for you and ships nothing externally — events are pretty-printed in your terminal during dev and stay silent in production. To get a production view, point Docus at an observability backend and you're done.
What's a drain?
A drain is just a destination for your logs. You set one environment variable, and every event that happens on your site (a page request, an AI tool call, a 500 error) is forwarded to that destination. Common destinations:
- Axiom — searchable log storage with dashboards.
- OTLP — the OpenTelemetry standard, supported by Grafana Cloud, Honeycomb, New Relic, Datadog, self-hosted collectors…
- Sentry — error tracking specifically.
- Datadog, HyperDX, Better Stack, PostHog — also supported.
Without a drain, events still get pretty-printed in dev and you can tail your server output in production. With a drain, you get search, dashboards, alerts.
Quick start
Pick a backend, add the env var, deploy. That's it.
NUXT_AXIOM_TOKEN=xaat-...
NUXT_AXIOM_DATASET=docus-logs
Create a token and dataset at app.axiom.co. Events show up within seconds.
NUXT_OTLP_ENDPOINT=https://otel.example.com:4318
Works with any OpenTelemetry collector — Grafana Cloud, Honeycomb, otel-collector, etc.
NUXT_SENTRY_DSN=https://...@sentry.io/...
Errors become Sentry issues. Non-error events become breadcrumbs on the active transaction.
NUXT_DD_API_KEY=...
NUXT_DD_SITE=datadoghq.eu
HyperDX, Better Stack, and PostHog follow the same pattern: NUXT_HYPERDX_API_KEY, NUXT_BETTER_STACK_SOURCE_TOKEN, and NUXT_POSTHOG_API_KEY. The full list is in the evlog adapters reference. Flushing to the drain uses waitUntil, so it happens after the response has been sent.
What you'll see in your drain
MCP server calls
Each call to your /mcp endpoint records the transport, session, JSON-RPC method and tool name automatically — these are added by @nuxtjs/mcp-toolkit whenever evlog/nuxt is registered:
{
"service": "docus/mcp",
"request": { "method": "POST", "path": "/mcp" },
"mcp": {
"transport": "streamable-http",
"route": "/mcp",
"method": "tools/call",
"tool": "get-page",
"session_id": "session_abc",
"request_id": 12
},
"content": {
"path": "/en/getting-started/installation",
"title": "Installation",
"contentLength": 2148
},
"response": { "status": 200, "duration": 84 }
}
If you set up authentication on the MCP server, user.id / user.email and session.id are also auto-tagged from event.context.user and event.context.session. See the mcp-toolkit logging docs for the full schema.
This tells you which sessions are connected, which pages they fetch, whether they hit 404s, and how fast you respond. Useful when you want to know if your MCP server is being used at all, or to find content gaps (404s coming from agents tell you what's missing).
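For example (purely illustrative, nothing Docus-specific), if you export drained events as JSON, a few lines of TypeScript can rank the 404s agents are hitting:

```typescript
// Illustrative only: count which paths agents requested but got a 404 for,
// given an exported array of evlog-style MCP events.
type McpEvent = {
  mcp?: { tool?: string }
  content?: { path?: string }
  response?: { status?: number }
}

function missingPaths(events: McpEvent[]): Map<string, number> {
  const counts = new Map<string, number>()
  for (const e of events) {
    if (e.response?.status === 404 && e.content?.path) {
      counts.set(e.content.path, (counts.get(e.content.path) ?? 0) + 1)
    }
  }
  return counts
}

const gaps = missingPaths([
  { mcp: { tool: 'get-page' }, content: { path: '/en/typo' }, response: { status: 404 } },
  { mcp: { tool: 'get-page' }, content: { path: '/en/typo' }, response: { status: 404 } },
  { mcp: { tool: 'get-page' }, content: { path: '/en/index' }, response: { status: 200 } },
])
console.log([...gaps.entries()]) // → [['/en/typo', 2]]
```

The paths that surface most often are the pages your agents expect to exist but don't.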
AI assistant conversations
Each conversation captures the full AI run — model, tokens, tool calls, latency:
{
"request": { "path": "/__docus__/assistant" },
"assistant": {
"siteName": "Docus",
"model": "google/gemini-3-flash",
"tools": ["list-pages", "get-page"]
},
"ai": {
"model": "google/gemini-3-flash",
"inputTokens": 384,
"outputTokens": 217,
"totalTokens": 601,
"toolCalls": ["list-pages", "get-page"],
"tools": [
{ "name": "list-pages", "durationMs": 92, "success": true },
{ "name": "get-page", "durationMs": 84, "success": true }
],
"msToFirstChunk": 612,
"tokensPerSecond": 47.2,
"finishReason": "stop"
},
"response": { "status": 200, "duration": 1840 }
}
This lets you answer:
- How many conversations happen per day?
- Which tools are slow or fail?
- How long do conversations take, and where is the latency (time-to-first-chunk vs total)?
- What's the token volume, and is it growing?
Going further
By default, the question text is not captured (privacy + payload size) and cost is not estimated. If you want either, override the assistant route in your project and add the missing context:
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'
export default defineEventHandler(async (event) => {
const log = useLogger(event)
const ai = createAILogger(log, {
cost: {
'google/gemini-3-flash': { input: 0.075, output: 0.3 }, // $/1M tokens
},
})
const { messages } = await readBody(event)
const lastUserMessage = messages.findLast?.((m: { role: string }) => m.role === 'user')
log.set({
assistant: {
question: typeof lastUserMessage?.content === 'string'
? lastUserMessage.content.slice(0, 500)
: undefined,
},
})
// ... rest of the handler, using `ai.wrap(model)` as usual
})
You then get ai.estimatedCost (in dollars) and assistant.question on every event.
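The estimate itself is plain arithmetic on the token counts. With the illustrative rates above (dollars per 1M tokens), the sample event from earlier (384 input / 217 output tokens) works out to:

```typescript
// Sketch of how a per-event cost estimate can be derived from token counts
// and $/1M-token rates. The rates here are illustrative, not real pricing.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  rate: { input: number; output: number }, // dollars per 1M tokens
): number {
  return (inputTokens / 1_000_000) * rate.input + (outputTokens / 1_000_000) * rate.output
}

const cost = estimateCost(384, 217, { input: 0.075, output: 0.3 })
console.log(cost.toFixed(7)) // ≈ 0.0000939 dollars for this conversation
```

Fractions of a cent per conversation, but multiplied by daily traffic it becomes a number worth alerting on.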
Errors
Any thrown error is captured with its cause chain, the route it broke on, and any context the handler had attached:
{
"request": { "method": "POST", "path": "/mcp" },
"mcp": { "tool": "get-page", "session_id": "session_xyz" },
"content": { "path": "/en/typo", "collectionName": "docs_en" },
"error": {
"message": "Page \"/en/typo\" not found in collection \"docs_en\"",
"why": "No content document matches this path",
"fix": "Call list-pages to discover available paths",
"stack": "..."
},
"response": { "status": 404, "duration": 12 }
}
In dev
Run pnpm dev and any request prints a tree directly in your terminal:
INFO GET /__docus__/assistant 200 (1840ms) [req_abc]
├─ assistant
│ ├─ model "google/gemini-3-flash"
│ └─ tools ["get-page", "list-pages"]
└─ ai
├─ inputTokens 384
├─ outputTokens 217
├─ msToFirstChunk 612
└─ toolCalls ["list-pages", "get-page"]
Set evlog.silent: true in nuxt.config.ts if you want events to flow only to the drain (typical on Vercel where stdout already goes to your platform logs).
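For example, in nuxt.config.ts:

```typescript
// nuxt.config.ts — suppress pretty-printed stdout output, rely on the drain only
export default defineNuxtConfig({
  evlog: {
    silent: true,
  },
})
```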
Customization
Most users don't need to touch anything beyond the env var. If you do, override defaults in nuxt.config.ts:
export default defineNuxtConfig({
evlog: {
env: { service: 'my-docs' },
sampling: {
// keep 25% of normal info events, all 4xx and any request > 1s
rates: { info: 25 },
keep: [{ status: 400 }, { duration: 1000 }],
},
redact: {
paths: ['headers.authorization', 'body.email'],
},
},
})
Common knobs:
| Option | When to use it |
|---|---|
| env.service | You run several docs sites and want them tagged separately in the drain |
| silent: true | Stdout already goes somewhere (Vercel, Cloudflare) — avoid double output |
| sampling.rates | High-traffic site where you don't need every single event |
| sampling.keep | Always keep slow or errored requests regardless of sampling |
| redact | Add custom paths on top of the built-in PII redaction |
| enabled: false | Turn the whole thing off |
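To make the sampling semantics concrete, here is one plausible reading of the options above as code. This is a sketch of the intended behavior, not evlog's actual implementation:

```typescript
// Sketch of the sampling decision implied by the config above — not evlog's code.
// A matching keep rule wins unconditionally; otherwise events pass at the rate
// configured for their level.
type LogEvent = { level: string; status: number; duration: number }

function shouldKeep(
  e: LogEvent,
  opts: { rates: Record<string, number>; keep: { status?: number; duration?: number }[] },
  random: () => number = Math.random,
): boolean {
  // Any matching keep rule (status >= threshold, or duration > threshold) wins.
  for (const rule of opts.keep) {
    if (rule.status !== undefined && e.status >= rule.status) return true
    if (rule.duration !== undefined && e.duration > rule.duration) return true
  }
  const rate = opts.rates[e.level]
  if (rate === undefined) return true // levels without a rate are never sampled out
  return random() * 100 < rate // e.g. rates.info = 25 keeps roughly 25%
}

const opts = { rates: { info: 25 }, keep: [{ status: 400 }, { duration: 1000 }] }
shouldKeep({ level: 'info', status: 404, duration: 12 }, opts) // always true: 4xx
shouldKeep({ level: 'info', status: 200, duration: 1840 }, opts) // always true: slow
```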
Custom drain (advanced)
If the env-var detection doesn't fit (multiple drains, internal pipeline, conditional routing), drop a server plugin in your project:
import { createAxiomDrain } from 'evlog/axiom'
import { createOTLPDrain } from 'evlog/otlp'
export default defineNitroPlugin((nitroApp) => {
const axiom = createAxiomDrain({ dataset: 'docs-prod' })
const otlp = createOTLPDrain({ endpoint: process.env.OTLP_ENDPOINT! })
nitroApp.hooks.hook('evlog:drain', async (ctx) => {
await Promise.all([axiom(ctx), otlp(ctx)])
})
})
The full drain API is documented at evlog.dev.
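You can also hand-roll a drain instead of composing the built-in ones. The sketch below assumes the hook context exposes an events array and posts NDJSON to a hypothetical internal endpoint; check the drain API at evlog.dev for the real interface:

```typescript
// Hypothetical hand-rolled drain: serialize events as NDJSON and POST them to
// an internal ingest endpoint. The `ctx.events` shape and the endpoint URL are
// assumptions for illustration, not evlog's documented API.
type LogEvent = Record<string, unknown>

function toNDJSON(events: LogEvent[]): string {
  return events.map((e) => JSON.stringify(e)).join('\n')
}

async function internalDrain(ctx: { events: LogEvent[] }): Promise<void> {
  await fetch('https://logs.internal.example/ingest', { // hypothetical endpoint
    method: 'POST',
    headers: { 'content-type': 'application/x-ndjson' },
    body: toNDJSON(ctx.events),
  })
}

// Registered the same way as above: nitroApp.hooks.hook('evlog:drain', internalDrain)
console.log(toNDJSON([{ a: 1 }, { b: 2 }])) // two JSON lines
```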
Adding context from your own pages
If you write a custom server route or Vue page that you want to track, the same logger is available. In a server route:
import { useLogger, createError } from 'evlog'
export default defineEventHandler(async (event) => {
const log = useLogger(event)
const body = await readBody(event)
log.set({ contact: { topic: body.topic } })
if (!body.email) {
throw createError({
message: 'Email is required',
status: 400,
why: 'Form submission missing email field',
})
}
await sendEmail(body)
return { ok: true }
})
And from a Vue page, via the client logger:
<script setup lang="ts">
import { log } from 'evlog/client'
function onCopy(snippet: string) {
log.info({ action: 'copy_code', code: { length: snippet.length } })
}
</script>
Client events reach the server only if you enable the transport in nuxt.config.ts:
export default defineNuxtConfig({
evlog: {
transport: { enabled: true },
},
})
This exposes a route at /api/_evlog/ingest that receives client events and lets them flow through the same drain pipeline as server events.