Usage Accounting

The OpenRouter API provides built-in Usage Accounting that allows you to track AI model usage without making additional API calls. This feature provides detailed information about token counts, costs, and caching status directly in your API responses.

Usage Information

When enabled, the API will return detailed usage information including:

  1. Prompt and completion token counts using the model’s native tokenizer
  2. Cost in credits
  3. Reasoning token counts (if applicable)
  4. Cached token counts (if available)

This information is included in the last SSE message for streaming responses, or in the complete response for non-streaming requests.

Enabling Usage Accounting

You can enable usage accounting in your requests by including the usage parameter:

{
  "model": "your-model",
  "messages": [],
  "usage": {
    "include": true
  }
}

Response Format

When usage accounting is enabled, the response will include a usage object with detailed token information:

{
  "object": "chat.completion.chunk",
  "usage": {
    "completion_tokens": 2,
    "completion_tokens_details": {
      "reasoning_tokens": 0
    },
    "cost": 197,
    "prompt_tokens": 194,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "total_tokens": 196
  }
}
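
For example, here is a minimal sketch of reading these fields from a parsed non-streaming response, assuming response is the result of a requests.post call as in the Examples section below:

# A minimal sketch of reading the usage block from a parsed response;
# field names follow the example above.
data = response.json()
usage = data.get("usage", {})
print("Prompt tokens:", usage.get("prompt_tokens"))
print("Completion tokens:", usage.get("completion_tokens"))
print("Reasoning tokens:", usage.get("completion_tokens_details", {}).get("reasoning_tokens"))
print("Cached prompt tokens:", usage.get("prompt_tokens_details", {}).get("cached_tokens"))
print("Cost (credits):", usage.get("cost"))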

Performance Impact

Enabling usage accounting adds a few hundred milliseconds to the final response while the API calculates token counts and costs. This delay affects only the final message and does not impact overall streaming performance.

Benefits

  1. Efficiency: Get usage information without making separate API calls
  2. Accuracy: Token counts are calculated using the model’s native tokenizer
  3. Transparency: Track costs and cached token usage in real-time
  4. Detailed Breakdown: Separate counts for prompt, completion, reasoning, and cached tokens

Best Practices

  1. Enable usage tracking when you need to monitor token consumption or costs
  2. Account for the slight delay in the final response when usage accounting is enabled
  3. Consider implementing usage tracking in development to optimize token usage before production
  4. Use the cached token information to optimize your application’s performance (see the sketch below)
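
As an illustration of the last point, here is a hypothetical helper (not part of the API) that derives a cache hit rate from the usage object shown in the Response Format section:

def cache_hit_rate(usage: dict) -> float:
    # Hypothetical helper: fraction of prompt tokens served from cache.
    prompt = usage.get("prompt_tokens", 0)
    cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    return cached / prompt if prompt else 0.0

A consistently low hit rate can be a sign that prompts vary too much between requests to benefit from provider-side prompt caching.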

Alternative: Getting Usage via Generation ID

You can also retrieve usage information asynchronously by using the generation ID returned from your API calls. This is particularly useful when you want to fetch usage statistics after the completion has finished or when you need to audit historical usage.

To use this method:

  1. Make your chat completion request as normal
  2. Note the id field in the response
  3. Use that ID to fetch usage information via the /generation endpoint

For more details on this approach, see the Get a Generation documentation.
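
As a sketch of that flow, assuming (per the Get a Generation documentation) that the endpoint accepts the generation ID as an id query parameter:

import requests

# Sketch: fetch usage for a finished generation by its ID.
generation_id = "..."  # the id field from a prior chat completion response

resp = requests.get(
    "https://openrouter.ai/api/v1/generation",
    headers={"Authorization": f"Bearer {{API_KEY_REF}}"},
    params={"id": generation_id},
)
print(resp.json())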

Examples

Basic Usage with Token Tracking

import requests

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {{API_KEY_REF}}",
    "Content-Type": "application/json"
}
payload = {
    "model": "{{MODEL}}",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
    "usage": {
        "include": True
    }
}

response = requests.post(url, headers=headers, json=payload)
data = response.json()
print("Response:", data['choices'][0]['message']['content'])
print("Usage Stats:", data['usage'])

Streaming with Usage Information

This example shows how to handle usage information in streaming mode:

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="{{API_KEY_REF}}",
)

def chat_completion_with_usage(messages):
    # usage is an OpenRouter-specific parameter, so it must be passed
    # through the OpenAI SDK via extra_body rather than as a named argument.
    response = client.chat.completions.create(
        model="{{MODEL}}",
        messages=messages,
        extra_body={
            "usage": {
                "include": True
            }
        },
        stream=True
    )
    return response

for chunk in chat_completion_with_usage([
    {"role": "user", "content": "Write a haiku about Paris."}
]):
    # The usage block arrives on the final chunk, which carries no choices;
    # on earlier chunks, chunk.usage is None.
    if chunk.usage:
        print("\nUsage Statistics:")
        print(f"Total Tokens: {chunk.usage.total_tokens}")
        print(f"Prompt Tokens: {chunk.usage.prompt_tokens}")
        print(f"Completion Tokens: {chunk.usage.completion_tokens}")
        print(f"Cost: {chunk.usage.cost} credits")
    elif chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")