Qwen/Qwen3-30B-A3B

qwen3-30b-a3b

Provider: SiliconFlow (China) · Context: 131,000 tokens · Output limit: 131,000 tokens

Available from 9 providers:

| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output limit |
|---|---|---|---|---|---|
| siliconflowcn (SiliconFlow, China) | models-dev | $0.09 | $0.45 | 131,000 | 131,000 |
| chutes (Chutes) | models-dev | $0.06 | $0.22 | 40,960 | 40,960 |
| siliconflow (SiliconFlow) | models-dev | $0.09 | $0.45 | 131,000 | 131,000 |
| helicone (Helicone) | models-dev | $0.08 | $0.29 | 41,000 | 41,000 |
| friendli (Friendli) | models-dev | – | – | 131,072 | 8,000 |
| dashscope | litellm | $0.00 | $0.00 | 129,024 | – |
| deepinfra | litellm | $0.08 | $0.29 | 40,960 | – |
| fireworksai (fireworks_ai) | litellm | $0.15 | $0.60 | 131,072 | – |
| openrouter | openrouter | $0.06 | $0.22 | 40,960 | – |

OpenRouter's listing description: Qwen3, the latest generation of the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures built for reasoning, multilingual support, and advanced agent tasks. It can switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue. Qwen3 significantly outperforms earlier models such as QwQ and Qwen2.5 in mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue. The Qwen3-30B-A3B variant has 30.5 billion total parameters (3.3 billion activated), 48 layers, and 128 experts (8 activated per token), and supports contexts of up to 131K tokens with YaRN.
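The description above mentions Qwen3's switch between a thinking mode and a non-thinking mode. The sketch below shows how that toggle might look through an OpenAI-compatible chat completions endpoint; the base URL, the model slug `qwen/qwen3-30b-a3b`, and the `/no_think` soft switch are assumptions drawn from OpenRouter's and Qwen3's published usage notes, so check your provider's documentation for the exact controls.

```python
# Sketch: toggling Qwen3's thinking vs. non-thinking mode through an
# OpenAI-compatible endpoint. The base URL, model slug, and the
# "/no_think" soft switch are assumptions (not confirmed by this listing);
# some providers expose the toggle as a request parameter instead.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # illustrative; any provider above with an OpenAI-compatible API
    api_key="YOUR_API_KEY",
)

# Thinking mode (the default): the model may emit reasoning before its answer.
reasoned = client.chat.completions.create(
    model="qwen/qwen3-30b-a3b",
    messages=[{"role": "user", "content": "Prove that the sum of two odd numbers is even."}],
)

# Non-thinking mode: append the /no_think soft switch for fast, direct dialogue.
direct = client.chat.completions.create(
    model="qwen/qwen3-30b-a3b",
    messages=[{"role": "user", "content": "Give a one-line summary of YaRN. /no_think"}],
)

print(reasoned.choices[0].message.content)
print(direct.choices[0].message.content)
```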
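All prices in the table are quoted in USD per million tokens, so one request costs input_tokens / 1,000,000 × input price plus output_tokens / 1,000,000 × output price. The sketch below compares providers for a hypothetical 8K-input / 2K-output request; the PRICES dict is copied from the table above, the request size is made up, and the friendli row is omitted because it publishes no prices.

```python
# Estimate the per-request cost of Qwen3-30B-A3B across providers.
# Prices are USD per 1M tokens, taken from the table above; the request
# size below is an arbitrary example.

PRICES = {
    # provider: (input $/1M, output $/1M)
    "siliconflowcn": (0.09, 0.45),
    "chutes":        (0.06, 0.22),
    "siliconflow":   (0.09, 0.45),
    "helicone":      (0.08, 0.29),
    "dashscope":     (0.00, 0.00),
    "deepinfra":     (0.08, 0.29),
    "fireworksai":   (0.15, 0.60),
    "openrouter":    (0.06, 0.22),
}

def request_cost(input_tokens: int, output_tokens: int, prices: tuple[float, float]) -> float:
    """Cost in USD for one request at per-1M-token prices."""
    input_price, output_price = prices
    return input_tokens / 1_000_000 * input_price + output_tokens / 1_000_000 * output_price

if __name__ == "__main__":
    # Hypothetical request: 8K prompt tokens, 2K completion tokens.
    for provider, prices in sorted(PRICES.items(), key=lambda kv: request_cost(8_000, 2_000, kv[1])):
        print(f"{provider:14s} ${request_cost(8_000, 2_000, prices):.6f}")
```

Even the most expensive listing here (fireworksai) works out to $0.0024 for a request of that size, so per-request cost is well under a cent across the board.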