# qwen3-235b-a22b

Provider: Alibaba, Context: 131072, Output Limit: 16384
| Provider | Source | Input ($/1M) | Output ($/1M) | Description | Free |
|---|---|---|---|---|---|
| alibaba | models-dev | $0.70 | $2.80 | Context: 131072, Output Limit: 16384 | |
| nvidia | models-dev | $0.00 | $0.00 | Context: 131072, Output Limit: 8192 | |
| alibabacn | models-dev | $0.29 | $1.15 | Alibaba (China). Context: 131072, Output Limit: 16384 | |
| siliconflowcn | models-dev | $0.35 | $1.42 | SiliconFlow (China). Context: 131000, Output Limit: 131000 | |
| chutes | models-dev | $0.30 | $1.20 | Context: 40960, Output Limit: 40960 | |
| siliconflow | models-dev | $0.35 | $1.42 | Context: 131000, Output Limit: 131000 | |
| fireworksai | models-dev | $0.22 | $0.88 | Context: 128000, Output Limit: 16384 | |
| deepinfra | litellm | $0.18 | $0.54 | Context: 40960 | |
| hyperbolic | litellm | $2.00 | $2.00 | Context: 131072 | |
| openrouter | openrouter | $0.18 | $0.54 | Qwen3-235B-A22B is a 235B-parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction following, and agent tool-calling capabilities. It natively handles a 32K-token context window and extends up to 131K tokens using YaRN-based scaling. Context: 40960 | |
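Since all prices above are quoted per million tokens, the cost of a request is linear in its prompt and completion token counts. A minimal sketch of that arithmetic, using the alibaba row's prices ($0.70 in / $2.80 out) and hypothetical token counts:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in USD, given prices quoted in $ per 1M tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# alibaba row: $0.70/1M input, $2.80/1M output.
# Hypothetical request: 10,000 prompt tokens, 2,000 completion tokens.
cost = request_cost(10_000, 2_000, 0.70, 2.80)
print(f"${cost:.4f}")  # → $0.0126
```

The same function applied with the deepinfra prices ($0.18 / $0.54) shows why per-provider comparison matters: the identical request costs roughly a third as much there, at the cost of a smaller context window.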