deepseek-r1-0528
Per-provider pricing and context limits for deepseek-r1-0528 (reference configuration via Nvidia: 128000-token context, 4096-token output limit).
| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit | Free | Notes |
|---|---|---|---|---|---|---|---|
| nvidia | models-dev | $0.00 | $0.00 | 128000 | 4096 | ✓ | Nvidia |
| alibabacn | models-dev | $0.57 | $2.29 | 131072 | 16384 | | Alibaba (China) |
| githubmodels | models-dev | $0.00 | $0.00 | 65536 | 8192 | ✓ | GitHub Models |
| azure | models-dev | $1.35 | $5.40 | 163840 | 163840 | | Azure |
| huggingface | models-dev | $3.00 | $5.00 | 163840 | 163840 | | Hugging Face |
| wandb | models-dev | $1.35 | $5.40 | 161000 | 163840 | | Weights & Biases |
| synthetic | models-dev | $3.00 | $8.00 | 128000 | 128000 | | Synthetic |
| submodel | models-dev | $0.50 | $2.15 | 75000 | 163840 | | submodel |
| friendli | models-dev | - | - | 163840 | 163840 | | Friendli |
| fireworksai | models-dev | $3.00 | $8.00 | 160000 | 16384 | | Fireworks AI |
| ionet | models-dev | $2.00 | $8.75 | 128000 | 4096 | | IO.NET |
| azurecognitiveservices | models-dev | $1.35 | $5.40 | 163840 | 163840 | | Azure Cognitive Services |
| deepinfra | litellm | $0.50 | $2.15 | 163840 | - | | deepinfra |
| hyperbolic | litellm | $0.25 | $0.25 | 131072 | - | | hyperbolic |
| lambdaai | litellm | $0.20 | $0.60 | 131072 | - | | lambda_ai |
| openrouter | openrouter | $0.40 | $1.75 | 163840 | - | | May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. |
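Since all prices in the table are quoted per million tokens, the cost of a single request is simple arithmetic: `(input_tokens × input_price + output_tokens × output_price) / 1,000,000`. A minimal sketch of that calculation, using a few illustrative prices copied from the table (the `PRICES` dict and function name are assumptions for this example, not part of any provider SDK):

```python
# Estimate per-request cost from per-million-token prices.
# Prices below are illustrative values taken from the table above.
PRICES = {  # provider -> (input $/1M tokens, output $/1M tokens)
    "nvidia": (0.00, 0.00),
    "openrouter": (0.40, 1.75),
    "deepinfra": (0.50, 2.15),
    "fireworksai": (3.00, 8.00),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request on the given provider."""
    in_price, out_price = PRICES[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion on deepinfra:
# 10,000 * $0.50/1M + 2,000 * $2.15/1M = $0.005 + $0.0043 = $0.0093
cost = estimate_cost("deepinfra", 10_000, 2_000)
print(f"${cost:.4f}")  # prints "$0.0093"
```

Note that for reasoning models like this one, the (often lengthy) reasoning tokens are typically billed as output tokens, so output price dominates real-world cost far more than the headline input price suggests.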