
## Key Metrics
The dashboard displays metrics across two rows:

| Metric | Description |
|---|---|
| Total Requests | Number of LLM API calls made through Turen |
| Total Tokens | Combined input and output tokens (with in/out breakdown) |
| Avg Latency | Mean response time from LLM providers |
| Error Rate | Percentage of failed LLM API calls |
| Rules Injected | Number of policy rules applied to requests |
| Request Data | Total request payload size in bytes |
| Response Data | Total response payload size in bytes |
| Active Agents | Number of agents that sent LLM requests in the period |
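
As an informal sketch of how figures like these could fall out of per-request logs, the example below aggregates hypothetical log records. The record fields (`agent_id`, `input_tokens`, `latency_ms`, and so on) are illustrative assumptions, not Turen's documented schema.

```python
from dataclasses import dataclass

@dataclass
class RequestRecord:
    # Hypothetical per-request log entry; field names are illustrative,
    # not Turen's actual schema.
    agent_id: str
    input_tokens: int
    output_tokens: int
    latency_ms: float
    ok: bool

def summarize(records: list[RequestRecord]) -> dict:
    """Aggregate one reporting window into the dashboard's core metrics."""
    total = len(records)
    tokens_in = sum(r.input_tokens for r in records)
    tokens_out = sum(r.output_tokens for r in records)
    return {
        "total_requests": total,
        "total_tokens": tokens_in + tokens_out,  # shown with in/out breakdown
        "avg_latency_ms": sum(r.latency_ms for r in records) / total if total else 0.0,
        "error_rate": sum(not r.ok for r in records) / total if total else 0.0,
        "active_agents": len({r.agent_id for r in records}),
    }
```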
## Provider Breakdown

See how usage is distributed across LLM providers:

- Anthropic — Claude models
- OpenAI — GPT models
## Model Breakdown

A horizontal bar chart showing which specific models your team is using. This helps you:

- Track adoption of newer models
- Understand usage distribution
- Identify usage patterns across your team
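
The shares behind a chart like this amount to a frequency count over the model recorded for each request. A minimal sketch, assuming each log entry exposes a `model` string (an illustrative field name, with illustrative model names in the example):

```python
from collections import Counter

def model_shares(models: list[str]) -> dict[str, float]:
    """Return each model's share of total requests as a percentage."""
    counts = Counter(models)
    total = sum(counts.values())
    return {model: 100.0 * n / total for model, n in counts.most_common()}

# Example: three requests across two models
print(model_shares(["claude-sonnet-4", "claude-sonnet-4", "gpt-4o"]))
# {'claude-sonnet-4': 66.66..., 'gpt-4o': 33.33...}
```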
## Token Usage Summary

A breakdown of token consumption:

- Input Tokens — Total tokens sent to providers
- Output Tokens — Total tokens received from providers
- Avg Tokens/Request — Average token usage per API call
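
For example, a period with 1,000,000 input tokens and 200,000 output tokens across 4,000 requests averages (1,000,000 + 200,000) / 4,000 = 300 tokens per request.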

## Session Activity

Detailed session metrics, including:

- Total Sessions — Number of Claude Code sessions in the period
- Total Messages — Messages exchanged across all sessions
- Avg Duration — Average session length
- Unique Clients — Number of distinct machines with sessions
- Sessions This Week — Daily session count bar chart
- Activity by Hour — Heatmap showing when your team is most active (a bucketing sketch follows this list)
- Top Clients — Machines with the most session activity
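
As a minimal sketch of the hourly bucketing behind the Activity by Hour heatmap, assuming each session record carries a start timestamp (the data below is illustrative):

```python
from collections import Counter
from datetime import datetime

def activity_by_hour(session_starts: list[datetime]) -> list[int]:
    """Count session starts per hour of day (index 0 = midnight, 23 = 11pm)."""
    counts = Counter(ts.hour for ts in session_starts)
    return [counts.get(hour, 0) for hour in range(24)]

starts = [datetime(2025, 1, 6, 9, 15), datetime(2025, 1, 6, 9, 42),
          datetime(2025, 1, 6, 14, 5)]
print(activity_by_hour(starts))  # two sessions in the 9am bucket, one at 2pm
```

A full heatmap adds a day-of-week dimension, e.g. counting `(ts.weekday(), ts.hour)` pairs instead of hours alone.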