Telemetry Configuration
Enable distributed tracing to monitor agent behavior, debug issues, and track performance using OpenTelemetry.
Complete Reference
For complete field documentation, backend setup, and collector configuration, see agent.yml → Telemetry.
Overview
Telemetry provides visibility into your agent's operations through distributed tracing. When enabled, Dexto automatically traces agent operations, LLM calls, and tool executions.
What you get:
- Complete request lifecycle traces
- LLM token usage tracking
- Tool execution monitoring
- Export to any OTLP-compatible backend
Quick Start
1. Start Jaeger (Local)
```bash
docker run -d \
  --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```
2. Configure Agent
```yaml
telemetry:
  enabled: true
  serviceName: my-agent
  export:
    type: otlp
    endpoint: http://localhost:4318/v1/traces
```
3. View Traces
Open the Jaeger UI at http://localhost:16686 and explore your agent's traces.
Configuration Options
```yaml
telemetry:
  enabled: boolean              # Turn on/off (default: false)
  serviceName: string           # Service identifier in traces
  tracerName: string            # Tracer name (default: 'dexto-tracer')
  export:
    type: 'otlp' | 'console'    # Export destination
    protocol: 'http' | 'grpc'   # OTLP protocol (default: 'http')
    endpoint: string            # Backend URL
    headers:                    # Optional auth headers
      [key: string]: string
```
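As a sanity check, the schema above can be expressed as a small standalone validator. This is a hypothetical helper, not part of Dexto; the field names and defaults mirror the table above, and the `serviceName` fallback is an assumption.

```python
# Hypothetical validator for the telemetry block above; not Dexto code.
# Field names and defaults mirror the documented schema.
def validate_telemetry(cfg: dict) -> dict:
    export = cfg.get("export") or {}
    if export.get("type") not in ("otlp", "console"):
        raise ValueError("export.type must be 'otlp' or 'console'")
    if export["type"] == "otlp" and not export.get("endpoint"):
        raise ValueError("otlp export requires an endpoint")
    return {
        "enabled": bool(cfg.get("enabled", False)),           # default: false
        "serviceName": cfg.get("serviceName", "dexto-agent"),  # assumed fallback
        "tracerName": cfg.get("tracerName", "dexto-tracer"),   # documented default
        "export": {
            "type": export["type"],
            "protocol": export.get("protocol", "http"),        # default: http
            "endpoint": export.get("endpoint"),
            "headers": export.get("headers", {}),
        },
    }

cfg = validate_telemetry({
    "enabled": True,
    "serviceName": "my-agent",
    "export": {"type": "otlp", "endpoint": "http://localhost:4318/v1/traces"},
})
print(cfg["export"]["protocol"])  # unset fields fall back to documented defaults
```

Running a config through a check like this before deployment catches the most common mistake: enabling OTLP export without an endpoint.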
Export Types
OTLP (Production)
Export to OTLP-compatible backends:
```yaml
telemetry:
  enabled: true
  serviceName: my-prod-agent
  export:
    type: otlp
    endpoint: http://localhost:4318/v1/traces
```
Console (Development)
Print traces to terminal:
```yaml
telemetry:
  enabled: true
  export:
    type: console
```
Common Configurations
Local Jaeger
```yaml
telemetry:
  enabled: true
  serviceName: my-dev-agent
  export:
    type: otlp
    protocol: http
    endpoint: http://localhost:4318/v1/traces
```
Grafana Cloud
```yaml
telemetry:
  enabled: true
  serviceName: my-prod-agent
  export:
    type: otlp
    endpoint: https://otlp-gateway-prod.grafana.net/otlp
    headers:
      authorization: "Basic $GRAFANA_CLOUD_TOKEN"
```
Honeycomb
```yaml
telemetry:
  enabled: true
  serviceName: my-prod-agent
  export:
    type: otlp
    endpoint: https://api.honeycomb.io:443
    headers:
      x-honeycomb-team: $HONEYCOMB_API_KEY
```
What Gets Traced
Dexto automatically traces:
- Agent operations - Full request lifecycle
- LLM calls - Model invocations with token counts
- Tool executions - Tool calls and results
Key attributes:
- `gen_ai.usage.input_tokens` - Prompt tokens
- `gen_ai.usage.output_tokens` - Completion tokens
- `llm.provider` - Provider name
- `llm.model` - Model identifier
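Once spans are exported, these attributes can be aggregated offline to estimate per-model token usage. A sketch, with plain dicts standing in for exported spans; the attribute keys are the ones listed above:

```python
from collections import defaultdict

# Sketch: sum the gen_ai.* token attributes across exported spans
# (represented here as plain dicts) to get per-model totals.
def token_totals(spans):
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for span in spans:
        attrs = span.get("attributes", {})
        model = attrs.get("llm.model")
        if model is None:
            continue  # not an LLM span (e.g. a tool execution)
        totals[model]["input"] += attrs.get("gen_ai.usage.input_tokens", 0)
        totals[model]["output"] += attrs.get("gen_ai.usage.output_tokens", 0)
    return dict(totals)

spans = [
    {"attributes": {"llm.provider": "openai", "llm.model": "gpt-4o",
                    "gen_ai.usage.input_tokens": 1200,
                    "gen_ai.usage.output_tokens": 300}},
    {"attributes": {"llm.model": "gpt-4o",
                    "gen_ai.usage.input_tokens": 800,
                    "gen_ai.usage.output_tokens": 150}},
    {"attributes": {"tool.name": "search"}},  # tool span, no token counts
]
print(token_totals(spans))  # {'gpt-4o': {'input': 2000, 'output': 450}}
```

In practice the same aggregation is usually done with your backend's query language rather than by hand, but the attribute keys are identical.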
Use Cases
| Scenario | How Telemetry Helps |
|---|---|
| Debug slow requests | Identify bottlenecks in traces |
| Monitor token usage | Track LLM costs and optimize prompts |
| Production monitoring | Set alerts for errors and latency |
| Performance optimization | Find inefficient operations |
Performance Impact
Minimal overhead:
- ~1-2ms per span
- Async export (non-blocking)
- Automatic batching
For high-volume agents, consider sampling or using a collector.
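If the agent's embedded SDK honors the standard OpenTelemetry environment variables (an assumption worth verifying for your deployment), head-based sampling can be enabled without any code or config changes:

```shell
# Standard OTel SDK sampling env vars (defined in the OpenTelemetry spec);
# whether Dexto's SDK reads them is an assumption to verify.
export OTEL_TRACES_SAMPLER=traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1   # keep roughly 10% of traces
```

Ratio-based sampling keeps whole traces together, so sampled requests still show their complete lifecycle.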
Best Practices
- Enable in production - Essential for observability
- Use meaningful service names - A distinct serviceName per deployment or environment makes traces easy to filter
- Set up monitoring - Create alerts for issues
- Consider sampling - For high-traffic scenarios
- Use collectors - For advanced processing and buffering
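Combining the last two practices, a minimal OpenTelemetry Collector pipeline might look like the sketch below. The endpoints and the 10% sampling rate are placeholders to adapt; agents would point their `export.endpoint` at the collector instead of the backend.

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318     # agents send traces here
processors:
  probabilistic_sampler:
    sampling_percentage: 10        # placeholder: keep 10% of traces
  batch: {}                        # buffer and batch before export
exporters:
  otlp:
    endpoint: jaeger:4317          # placeholder backend address
    tls:
      insecure: true               # local/dev only
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp]
```

A collector also lets you change backends or sampling rates without touching agent configuration.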
See Also
- agent.yml Reference → Telemetry - Complete field documentation
- OpenTelemetry Docs - Official OTEL documentation
- Jaeger Docs - Jaeger tracing platform