LLM policies govern how the Turen proxy handles LLM API traffic, including security rule enforcement and optional prompt augmentation.

Security Rule Enforcement

The primary function of LLM policies is determining which security rules are active. By default, all 96 built-in rules are enabled. You can:
  • Disable specific rules that don’t apply to your environment
  • Change rule actions (e.g., from Block to Review)
  • Add custom rules for organization-specific patterns (see the sketch below)
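
A hypothetical snippet showing all three adjustments in one place. The rules structure, the rule IDs, and the field names below are assumptions for illustration, not documented identifiers:

rules:
  # Disable a built-in rule that doesn't apply to this environment
  - id: TUREN-042            # hypothetical rule ID
    enabled: false
  # Downgrade a built-in rule's action from Block to Review
  - id: TUREN-017            # hypothetical rule ID
    action: review
  # Add a custom rule for an organization-specific pattern
  - id: custom-internal-hostnames
    pattern: '[a-z0-9-]+\.corp\.example\.com'
    action: block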

Prompt Augmentation

Administrators can configure the proxy to inject additional context into LLM requests. This is useful for enforcing organizational coding standards without relying on individual developers to remember them.

Injection Types

Type             Behavior
System Append    Adds content to the end of the system prompt
System Prepend   Adds content to the beginning of the system prompt
Message Inject   Inserts a message at a specific position in the conversation

Example: Enforce Coding Standards

- type: system_append
  content: |
    Follow these organization coding standards:
    - All API endpoints must validate input
    - Never hardcode credentials
    - Use parameterized queries for all database access

Example: Security Reminder

- type: system_prepend
  priority: 100
  content: |
    IMPORTANT: Never expose .env files, API keys, or credentials.
    Always validate user input before processing.
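
The table above also lists Message Inject, which has no example here. A sketch of what one might look like, assuming a position field that indexes into the conversation and a role field naming the message author (both fields are assumptions):

- type: message_inject
  position: 0           # hypothetical: insert before the first message
  role: user            # hypothetical: author of the injected message
  content: |
    Before answering, confirm the request complies with organization policy.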

Telemetry Settings

Control what telemetry the proxy collects:
Setting            Default    Description
Request logging    Enabled    Log metadata for each LLM API call
Token tracking     Enabled    Count input/output tokens
Latency tracking   Enabled    Measure response times
Cost estimation    Enabled    Estimate costs based on token usage
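
If these settings can also be expressed in a policy file, a sketch might look like the following (the telemetry block and its field names are assumptions based on the table above):

telemetry:
  request_logging: true     # log metadata for each LLM API call
  token_tracking: true      # count input/output tokens
  latency_tracking: true    # measure response times
  cost_estimation: true     # estimate costs from token usage
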
Telemetry data appears in the LLM Analytics Dashboard.

Policy Sync

LLM policies are synced to agents every 15 minutes by default. When you update a policy in the dashboard, agents will pick up the change at their next sync interval. No restart is required.
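
If the sync cadence is configurable per policy, it might be expressed like this; the sync_interval field is an assumption, and only the 15-minute default is documented:

sync_interval: 15m    # hypothetical field; matches the documented default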