Qwen3 32B

qwen3-32b

Provider: Alibaba, Context: 131072, Output Limit: 16384

Available at 15 Providers

| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit | Free |
| --- | --- | --- | --- | --- | --- | --- |
| alibaba (Alibaba) | models-dev | $0.70 | $2.80 | 131,072 | 16,384 | |
| groq (Groq) | models-dev | $0.29 | $0.59 | 131,072 | 16,384 | |
| alibabacn (Alibaba, China) | models-dev | $0.29 | $1.15 | 131,072 | 16,384 | |
| siliconflowcn (SiliconFlow, China) | models-dev | $0.14 | $0.57 | 131,000 | 131,000 | |
| chutes (Chutes) | models-dev | $0.08 | $0.24 | 40,960 | 40,960 | |
| cortecs (Cortecs) | models-dev | $0.10 | $0.33 | 16,384 | 16,384 | |
| siliconflow (SiliconFlow) | models-dev | $0.14 | $0.57 | 131,000 | 131,000 | |
| helicone (Helicone) | models-dev | $0.29 | $0.59 | 131,072 | 40,960 | |
| ovhcloud (OVHcloud AI Endpoints) | models-dev | $0.09 | $0.25 | 32,000 | 32,000 | |
| iflowcn (iFlow) | models-dev | $0.00 | $0.00 | 128,000 | 32,000 | Yes |
| friendli (Friendli) | models-dev | – | – | 131,072 | 8,000 | |
| deepinfra | litellm | $0.10 | $0.28 | 40,960 | – | |
| sambanova | litellm | $0.40 | $0.80 | 8,192 | – | |
| fireworksai (fireworks_ai) | litellm | $0.90 | $0.90 | 131,072 | – | |
| openrouter | openrouter | $0.08 | $0.24 | 40,960 | – | |

OpenRouter's listing describes the model as follows: Qwen3-32B is a dense 32.8B-parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K-token contexts and can extend to 131K tokens using YaRN-based scaling.
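Since all providers above quote prices per 1M tokens, the cost of a single request can be estimated with simple arithmetic; a minimal sketch, with prices taken from the table (token counts in the example are illustrative):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate request cost in USD from per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: 10,000 input + 2,000 output tokens on groq ($0.29 / $0.59 per 1M)
cost = request_cost(10_000, 2_000, 0.29, 0.59)
print(f"${cost:.6f}")  # → $0.004080
```

The same request on alibaba ($0.70 / $2.80) would cost roughly three times as much, which is why the spread in the table matters at scale.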
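The "thinking"/"non-thinking" switch mentioned in the description is exposed differently by each provider; the Qwen3 model card documents a `/no_think` soft switch that can be appended to a user message on chat endpoints. A hedged sketch of building an OpenAI-style request payload with that switch (the model id is taken from this page; how a given provider honors the switch may vary):

```python
def build_chat_payload(prompt: str, thinking: bool) -> dict:
    """Build an OpenAI-style chat payload; disable Qwen3's thinking
    mode by appending the documented /no_think soft switch."""
    content = prompt if thinking else prompt + " /no_think"
    return {
        "model": "qwen3-32b",
        "messages": [{"role": "user", "content": content}],
    }

# Thinking mode on for a reasoning-heavy task, off for quick chat
reasoning = build_chat_payload("Solve 17 * 24 step by step.", thinking=True)
chat = build_chat_payload("Say hello in French.", thinking=False)
```

Some serving stacks instead expose an `enable_thinking` flag at the chat-template level; check the specific provider's documentation before relying on either mechanism.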