The LLM Analytics dashboard gives you visibility into how AI coding agents are using LLM APIs across your organization. Data is shown for the last 30 days.

Key Metrics

The dashboard displays metrics across two rows:
  • Total Requests — Number of LLM API calls made through Turen
  • Total Tokens — Combined input and output tokens (with an input/output breakdown)
  • Avg Latency — Mean response time from LLM providers
  • Error Rate — Percentage of failed LLM API calls
  • Rules Injected — Number of policy rules applied to requests
  • Request Data — Total request payload size in bytes
  • Response Data — Total response payload size in bytes
  • Active Agents — Number of agents that sent LLM requests in the period
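To make the definitions above concrete, here is a minimal sketch of how these aggregates can be derived from proxied request logs. The record structure and field names are illustrative assumptions, not Turen's actual log schema.

```python
from statistics import mean

# Hypothetical per-request log records; field names are illustrative only.
requests = [
    {"latency_ms": 820, "tokens_in": 1200, "tokens_out": 450, "error": False},
    {"latency_ms": 1430, "tokens_in": 2100, "tokens_out": 900, "error": True},
    {"latency_ms": 640, "tokens_in": 800, "tokens_out": 300, "error": False},
]

total_requests = len(requests)
total_tokens = sum(r["tokens_in"] + r["tokens_out"] for r in requests)
avg_latency_ms = mean(r["latency_ms"] for r in requests)
# Error Rate is the share of failed calls, expressed as a percentage.
error_rate = 100 * sum(r["error"] for r in requests) / total_requests
```

With the sample records above, this yields 3 requests, 5750 total tokens, a mean latency of about 963 ms, and an error rate of about 33.3%.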

Provider Breakdown

See how usage is distributed across LLM providers:
  • Anthropic — Claude models
  • OpenAI — GPT models

Model Breakdown

A horizontal bar chart showing which specific models your team is using. This helps you:
  • Track adoption of newer models
  • Understand usage distribution
  • Identify usage patterns across your team
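The per-model distribution behind the bar chart is a simple frequency count over the model recorded for each request. A minimal sketch, with made-up model names standing in for whatever your agents actually call:

```python
from collections import Counter

# Hypothetical model name logged for each request.
calls = ["claude-sonnet", "gpt-4o", "claude-sonnet", "claude-opus"]

by_model = Counter(calls)
# most_common() orders models by request count, the same ranking
# a horizontal bar chart would display.
ranking = by_model.most_common()
```

Here `ranking` starts with `("claude-sonnet", 2)`, so that model would get the longest bar.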

Token Usage Summary

A breakdown of token consumption:
  • Input Tokens — Total tokens sent to providers
  • Output Tokens — Total tokens received from providers
  • Avg Tokens/Request — Average token usage per API call
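The three figures above reduce to two sums and a ratio. A sketch, assuming per-request input and output token counts are available (the numbers are invented for illustration):

```python
# Hypothetical per-request token counts.
input_tokens = [1200, 2100, 800]
output_tokens = [450, 900, 300]

total_in = sum(input_tokens)    # Input Tokens
total_out = sum(output_tokens)  # Output Tokens
# Avg Tokens/Request: combined tokens divided by the number of calls.
avg_per_request = (total_in + total_out) / len(input_tokens)
```

For the sample data this gives 4100 input tokens, 1650 output tokens, and roughly 1917 tokens per request.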

Session Activity

Detailed session metrics, including:
  • Total Sessions — Number of Claude Code sessions in the period
  • Total Messages — Messages exchanged across all sessions
  • Avg Duration — Average session length
  • Unique Clients — Number of distinct machines with sessions
  • Sessions This Week — Daily session count bar chart
  • Activity by Hour — Heatmap showing when your team is most active
  • Top Clients — Machines with the most session activity
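The Activity by Hour heatmap is a count of sessions bucketed by hour of day. A minimal sketch, assuming session-start timestamps are available as ISO 8601 strings (the timestamps here are invented):

```python
from collections import Counter
from datetime import datetime

# Hypothetical session-start timestamps.
starts = [
    "2024-05-01T09:15:00",
    "2024-05-01T09:47:00",
    "2024-05-01T14:02:00",
]

# Bucket sessions by hour of day; each bucket becomes one heatmap cell.
by_hour = Counter(datetime.fromisoformat(t).hour for t in starts)
```

With this sample, `by_hour` records 2 sessions in the 09:00 bucket and 1 in the 14:00 bucket; a real heatmap would typically also key on day of week.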