
DeepSeek R1 0528

deepseek-r1

The latest revision of DeepSeek's first-generation reasoning model

Available at 22 Providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.50 | $2.15 | The latest revision of DeepSeek's first-generation reasoning model | |
| together | together | $3.00 | $7.00 | - | |
| poe | poe | $18,000.00 | - | Top open-source reasoning LLM rivaling OpenAI's o1 model; delivers top-tier performance across math, code, and reasoning tasks at a fraction of the cost. All data you provide this bot will not be used in training, and is sent only to Together AI, a US-based company. Supports 164k tokens of input context and 33k tokens of output context. Uses the latest May 28th snapshot (DeepSeek-R1-0528). | |
| nvidia | models-dev | $0.00 | $0.00 | Provider: Nvidia, Context: 128000, Output Limit: 4096 | |
| alibabacn | models-dev | $0.57 | $2.29 | Provider: Alibaba (China), Context: 131072, Output Limit: 16384 | |
| siliconflowcn | models-dev | $0.50 | $2.18 | Provider: SiliconFlow (China), Context: 164000, Output Limit: 164000 | |
| githubmodels | models-dev | $0.00 | $0.00 | Provider: GitHub Models, Context: 65536, Output Limit: 8192 | |
| togetherai | models-dev | $3.00 | $7.00 | Provider: Together AI, Context: 163839, Output Limit: 12288 | |
| azure | models-dev | $1.35 | $5.40 | Provider: Azure, Context: 163840, Output Limit: 163840 | |
| siliconflow | models-dev | $0.50 | $2.18 | Provider: SiliconFlow, Context: 164000, Output Limit: 164000 | |
| iflowcn | models-dev | $0.00 | $0.00 | Provider: iFlow, Context: 128000, Output Limit: 32000 | |
| synthetic | models-dev | $0.55 | $2.19 | Provider: Synthetic, Context: 128000, Output Limit: 128000 | |
| nanogpt | models-dev | $1.00 | $2.00 | Provider: NanoGPT, Context: 128000, Output Limit: 8192 | |
| azurecognitiveservices | models-dev | $1.35 | $5.40 | Provider: Azure Cognitive Services, Context: 163840, Output Limit: 163840 | |
| azureai | litellm | $1.35 | $5.40 | Source: azure_ai, Context: 128000 | |
| deepinfra | litellm | $0.70 | $2.40 | Source: deepinfra, Context: 163840 | |
| deepseek | litellm | $0.55 | $2.19 | Source: deepseek, Context: 65536 | |
| fireworksai | litellm | $3.00 | $8.00 | Source: fireworks_ai, Context: 128000 | |
| hyperbolic | litellm | $0.40 | $0.40 | Source: hyperbolic, Context: 32768 | |
| sambanova | litellm | $5.00 | $7.00 | Source: sambanova, Context: 32768 | |
| snowflake | litellm | $0.00 | $0.00 | Source: snowflake, Context: 32768 | |
| openrouter | openrouter | $0.70 | $2.40 | DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120). MIT licensed: Distill & commercialize freely! Context: 163840 | |
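
All prices above are quoted in dollars per million tokens, so a single request's cost is just its token counts scaled by those rates. The sketch below shows that arithmetic for a few providers from the table; the prices are the published per-1M figures above, and the token counts are hypothetical, chosen only for illustration.

```python
# Minimal sketch of per-request cost using the per-1M-token prices from the
# table above. The token counts below are hypothetical, for illustration only.

def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """USD cost of one request, given prices in $ per 1M tokens."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# (input $/1M, output $/1M) copied from the table
providers = {
    "deepseek (litellm)":  (0.55, 2.19),
    "deepinfra (litellm)": (0.70, 2.40),
    "azure (models-dev)":  (1.35, 5.40),
    "togetherai":          (3.00, 7.00),
}

# Example: a 20,000-token prompt with a 4,000-token completion.
for name, (inp, outp) in providers.items():
    print(f"{name:22s} ${request_cost(20_000, 4_000, inp, outp):.4f}")
```

Because R1 is a reasoning model, its chain-of-thought tokens are billed as output, so the output rate tends to dominate the bill for reasoning-heavy workloads.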