The LLM Activity Dashboard gives you visibility into how AI coding agents are using LLM APIs across your organization. Navigate to Monitor > Dashboards and select the LLM Activity tab. Data is shown for the last 30 days.

Key Metrics

The dashboard displays five summary cards at the top:
  • Total Requests — Number of LLM API calls made through Turen (click to jump to LLM events)
  • Avg Latency — Mean response time from LLM providers
  • Error Rate — Percentage of failed LLM API calls, with error count
  • Data Volume — Total request and response payload sizes
  • Active Agents — Number of agents that sent LLM requests in the period (click to jump to fleet)
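These summary values can all be derived from raw request records. A minimal sketch, assuming each event exposes an agent ID, latency, payload sizes, and a failure flag (the field names here are illustrative, not Turen's actual event schema):

```python
from dataclasses import dataclass

@dataclass
class LLMRequest:
    # Illustrative fields; not Turen's actual event schema
    agent_id: str
    latency_ms: float
    request_bytes: int
    response_bytes: int
    failed: bool

def summarize(requests: list[LLMRequest]) -> dict:
    """Compute the five summary-card metrics from raw request events."""
    total = len(requests)
    errors = sum(r.failed for r in requests)
    return {
        "total_requests": total,
        "avg_latency_ms": sum(r.latency_ms for r in requests) / total if total else 0.0,
        "error_rate_pct": 100 * errors / total if total else 0.0,
        "error_count": errors,
        "data_volume_bytes": sum(r.request_bytes + r.response_bytes for r in requests),
        "active_agents": len({r.agent_id for r in requests}),
    }
```

Note that Error Rate is a percentage of all calls, while Active Agents counts distinct agents, so one busy agent still counts once.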

Provider Breakdown

A pie chart showing how usage is distributed across LLM providers:
  • Anthropic — Claude models
  • OpenAI — GPT models
  • Google — Gemini models
  • Mistral — Mistral models
  • Cohere — Cohere models
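Grouping requests into these provider slices typically means mapping each model identifier to its vendor. A sketch of that bucketing, assuming models can be matched by ID prefix (the prefixes below are assumptions, not a documented mapping):

```python
from collections import Counter

# Assumed model-id prefix → provider mapping; real model IDs may differ
PROVIDER_PREFIXES = {
    "claude": "Anthropic",
    "gpt": "OpenAI",
    "gemini": "Google",
    "mistral": "Mistral",
    "command": "Cohere",
}

def provider_for(model_id: str) -> str:
    """Map a model identifier to a provider name by prefix match."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model_id.lower().startswith(prefix):
            return provider
    return "Other"

def provider_shares(model_ids: list[str]) -> dict[str, float]:
    """Fraction of requests per provider, suitable for a pie chart."""
    counts = Counter(provider_for(m) for m in model_ids)
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}
```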

Model Breakdown

A horizontal bar chart showing which specific models your team is using (top 8). Model names are displayed in a friendly format (e.g., “Claude Opus 4.6”, “GPT-5.2 Pro”). This helps you:
  • Track adoption of newer models
  • Understand usage distribution
  • Identify usage patterns across your team
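The "top 8" ranking behind this chart amounts to counting requests per model and keeping the most frequent entries. A minimal sketch of that aggregation:

```python
from collections import Counter

def top_models(model_ids: list[str], n: int = 8) -> list[tuple[str, int]]:
    """Return the n most-used models with their request counts,
    ordered from most to least used."""
    return Counter(model_ids).most_common(n)
```

With fewer than n distinct models in the period, the list is simply shorter.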

Session Activity

The bottom section shows detailed session metrics:
  • Total Sessions — Number of Claude Code sessions in the period
  • Total Messages — Messages exchanged across all sessions, with daily average
  • Avg Duration — Average session length
  • Unique Clients — Number of distinct machines with sessions
  • Sessions This Week — Daily session count bar chart
  • Activity by Hour — 24-cell heatmap showing when your team is most active
  • Top Clients — Ranked list of machines with the most session activity, showing hostname, session count, total bytes, and last active time
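The 24-cell Activity by Hour heatmap is a straightforward bucketing of session timestamps by hour of day. A sketch, assuming each session event carries a datetime (timezone handling is omitted here for brevity):

```python
from collections import Counter
from datetime import datetime

def activity_by_hour(timestamps: list[datetime]) -> list[int]:
    """Bucket session timestamps into 24 hourly cells.
    Index 0 is midnight–1am, index 23 is 11pm–midnight."""
    counts = Counter(ts.hour for ts in timestamps)
    return [counts.get(h, 0) for h in range(24)]
```

Each cell's count then drives the heatmap's color intensity.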