Models
Explore and browse 400+ models and providers on our website, or with our API (including RSS).
Models API Standard
Our Models API makes the most important information about all LLMs freely available as soon as we confirm it.
API Response Schema
The Models API returns a standardized JSON response format that provides comprehensive metadata for each available model. This schema is cached at the edge and designed for reliable integration in production applications.
Root Response Object
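The full field table lives in the API reference; as a minimal sketch, the root object wraps the model list in a single `data` key (this shape follows the public `GET /api/v1/models` endpoint; treat anything beyond that key as an assumption):

```python
# Hedged sketch of the root response returned by GET /api/v1/models:
# a JSON object whose single "data" key holds the array of model objects.
root_response = {
    "data": [
        # ...one object per model (see Model Object Schema below)...
    ]
}
```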
Model Object Schema
Each model in the `data` array contains the following standardized fields:
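As a hedged illustration of those fields (names are drawn from the public models endpoint; the values here are placeholders, not a real listing):

```python
# Illustrative model entry; values are placeholders, not a real listing.
model = {
    "id": "example-org/example-model",   # model slug used in API requests
    "name": "Example Model",             # human-readable display name
    "created": 1700000000,               # Unix timestamp when the model was added
    "description": "Placeholder description.",
    "context_length": 128000,            # maximum context window, in tokens
    "architecture": {},                  # see Architecture Object below
    "pricing": {},                       # see Pricing Object below
    "top_provider": {},                  # see Top Provider Object below
    "per_request_limits": None,          # per-request limit info, if any
    "supported_parameters": ["tools", "temperature", "top_p"],
}
```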
Architecture Object
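A hedged sketch of the architecture object (modality and tokenizer fields as exposed by the models endpoint; treat the exact names as assumptions):

```python
# Illustrative architecture object for a multimodal, GPT-tokenized model.
architecture = {
    "modality": "text+image->text",         # combined input->output modality
    "input_modalities": ["text", "image"],  # accepted input types
    "output_modalities": ["text"],          # produced output types
    "tokenizer": "GPT",                     # tokenizer family
    "instruct_type": None,                  # instruction format, if any
}
```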
Pricing Object
All pricing values are in USD per token/request/unit. A value of `"0"` indicates the feature is free.
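A sketch of working with the pricing object (field names and example prices are illustrative assumptions; note the values arrive as strings, so decimal arithmetic avoids float rounding when computing costs):

```python
from decimal import Decimal

# Illustrative pricing object; prices are strings in USD per unit.
pricing = {
    "prompt": "0.000001",      # USD per input token
    "completion": "0.000002",  # USD per output token
    "request": "0",            # flat fee per request ("0" = free)
    "image": "0",              # USD per input image
}

# Cost of a request with 1,000 prompt tokens and 500 completion tokens.
cost = (Decimal(pricing["prompt"]) * 1000
        + Decimal(pricing["completion"]) * 500
        + Decimal(pricing["request"]))
# cost == Decimal("0.002")
```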
Top Provider Object
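A hedged sketch of the top provider object, which reports limits for the model's top-ranked provider (field names are assumptions based on the public endpoint):

```python
# Illustrative top_provider object; values are placeholders.
top_provider = {
    "context_length": 128000,       # this provider's context window, in tokens
    "max_completion_tokens": 4096,  # this provider's output-token cap
    "is_moderated": False,          # whether this provider moderates requests
}
```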
Supported Parameters
The `supported_parameters` array indicates which OpenAI-compatible parameters work with each model:
- `tools` - Function calling capabilities
- `tool_choice` - Tool selection control
- `max_tokens` - Response length limiting
- `temperature` - Randomness control
- `top_p` - Nucleus sampling
- `reasoning` - Internal reasoning mode
- `include_reasoning` - Include reasoning in response
- `structured_outputs` - JSON schema enforcement
- `response_format` - Output format specification
- `stop` - Custom stop sequences
- `frequency_penalty` - Repetition reduction
- `presence_penalty` - Topic diversity
- `seed` - Deterministic outputs
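As a sketch of how this field might be used in practice (the `models` list below stands in for the parsed `data` array; the helper name is our own):

```python
# Sketch: select models that advertise tool/function calling.
# `models` stands in for the parsed "data" array from the Models API.
models = [
    {"id": "a/tool-model", "supported_parameters": ["tools", "tool_choice"]},
    {"id": "b/plain-model", "supported_parameters": ["temperature"]},
]

def supports(model: dict, param: str) -> bool:
    """True if the model lists `param` in supported_parameters."""
    return param in model.get("supported_parameters", [])

tool_models = [m["id"] for m in models if supports(m, "tools")]
# tool_models == ["a/tool-model"]
```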
Different models tokenize text in different ways

Some models break text into multi-character chunks (GPT, Claude, Llama, etc.), while others tokenize by character (PaLM). This means that token counts, and therefore costs, will vary between models even when the inputs and outputs are identical. Costs are displayed and billed according to the tokenizer for the model in use. You can use the `usage` field in the response to get the token counts for the input and output.
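For example, reading those counts from a parsed completion response (the `usage` shape below follows the OpenAI-compatible convention; the surrounding response is a placeholder):

```python
# Sketch: reading token counts from the `usage` field of a parsed
# completion response (OpenAI-compatible usage object).
response = {
    "usage": {
        "prompt_tokens": 1200,       # input tokens, per this model's tokenizer
        "completion_tokens": 350,    # output tokens
        "total_tokens": 1550,        # prompt + completion
    }
}

usage = response["usage"]
```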
If there are models or providers you are interested in that OpenRouter doesn’t have, please tell us about them in our Discord channel.
For Providers
If you’re interested in working with OpenRouter, you can learn more on our providers page.