Traces
A trace represents one logical run of your system: a single request, a background job, or a single chat turn. In LLM products, a “single run” often spans many steps—retrieval, multiple model calls, tool execution, and retries—so the trace is the container that makes the whole story navigable.
When to start a new trace
Create one trace per:
- User request (REST/GraphQL request, webhook, queue message)
- Chat turn (user message → assistant response)
- Background job that produces a user-visible outcome
Avoid reusing the same trace for multiple requests; it makes debugging and metrics ambiguous.
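The one-trace-per-run rule can be sketched as follows. This is not the Boson SDK API; `startTrace` and the `Trace` shape are hypothetical, purely to illustrate scoping one trace to one chat turn:

```typescript
// Hypothetical minimal trace helper -- illustrative only, not the
// Boson SDK. Shows one-trace-per-chat-turn scoping.
type Trace = { id: string; name: string; startedAt: number };

let counter = 0;
function startTrace(name: string): Trace {
  // Each call mints a fresh trace; never reuse one across requests.
  return { id: `trace-${++counter}`, name, startedAt: Date.now() };
}

function handleChatTurn(userMessage: string): Trace {
  // One trace per user message -> assistant response.
  const trace = startTrace("chat.turn");
  // ... retrieval, model calls, tool execution recorded as spans ...
  return trace;
}

const a = handleChatTurn("hello");
const b = handleChatTurn("follow-up");
// Two turns, two distinct traces: a.id !== b.id
```

Because every turn gets its own trace id, per-turn latency and error metrics stay unambiguous.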
What a trace should include
A great trace has enough metadata to answer the first questions you always ask:
- Who: `userId` and/or `accountId` (only if policy allows)
- Where: `env` (dev/staging/prod), region, service name
- When: timestamps (usually automatic)
- What version: release id / git SHA / prompt version id
- What happened: a span tree that mirrors your workflow
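One way to keep this metadata consistent is a small factory with sensible defaults. The field names and defaults below are illustrative assumptions, not a fixed Boson schema:

```typescript
// Sketch of metadata worth attaching to every trace. Field names
// and defaults are illustrative, not a fixed Boson schema.
interface TraceMetadata {
  userId?: string;                    // who (only if policy allows)
  env: "dev" | "staging" | "prod";    // where
  region: string;
  service: string;
  release: string;                    // what version: release id / git SHA
  promptVersion?: string;             // prompt version id, if applicable
}

function traceMetadata(
  release: string,
  overrides: Partial<TraceMetadata> = {},
): TraceMetadata {
  return {
    env: "prod",
    region: "us-east-1",
    service: "chat-api",
    release,
    ...overrides,
  };
}

traceMetadata("abc1234", { env: "staging", userId: "u-42" });
```

Centralizing defaults this way means every trace answers "who, where, what version" without each call site repeating boilerplate.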
Correlation IDs
Use correlation IDs so traces connect to the rest of your observability stack:
- `trace_id`: Boson trace id (or your own id propagated into Boson)
- `request_id`: edge or gateway request id
- `session_id`: conversation/session identifier spanning turns
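A sketch of wiring these IDs together at the edge of a request handler. The header names (`x-trace-id`, `x-request-id`) and the fallback id format are assumptions, not a Boson convention:

```typescript
// Sketch: attaching correlation IDs so a trace can be joined against
// gateway logs and session analytics. Header names and the fallback
// id format are assumptions, not a Boson convention.
interface CorrelationIds {
  traceId: string;   // Boson trace id, or your own id propagated in
  requestId: string; // edge or gateway request id
  sessionId: string; // conversation/session identifier spanning turns
}

function correlate(
  headers: Record<string, string>,
  sessionId: string,
): CorrelationIds {
  return {
    // Reuse an upstream trace id when the gateway supplies one.
    traceId: headers["x-trace-id"] ?? `trace-${Date.now()}`,
    requestId: headers["x-request-id"] ?? "unknown",
    sessionId,
  };
}
```

With all three IDs on every trace, you can pivot from a Boson trace to the gateway log line (via `request_id`) or to the full conversation (via `session_id`).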
Good defaults (recommended)
- Keep trace names stable, e.g. `chat.turn`, `support.reply`, `agent.run`
- Add metadata you will filter on later: `env`, `release`, `model`, `featureFlag`
- Redact sensitive fields before sending anything off-box
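The redaction default above can be enforced with a small denylist filter run before anything leaves the process. The field list here is an example, not a policy:

```typescript
// Sketch of off-box redaction: strip sensitive fields from metadata
// before it is sent anywhere. The denylist is an example, not policy.
const SENSITIVE = new Set(["email", "phone", "apiKey"]);

function redact(meta: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(meta)) {
    out[key] = SENSITIVE.has(key) ? "[REDACTED]" : value;
  }
  return out;
}

redact({ env: "prod", email: "a@b.com" });
// → { env: "prod", email: "[REDACTED]" }
```

Running redaction at the single choke point where metadata is serialized is easier to audit than trusting every call site to remember.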