Qwen3 235B A22B Thinking 2507

qwen3-235b-a22b-thinking-2507

Provider: Nebius Token Factory, Context: 262144, Output Limit: 8192

Available at 18 Providers

| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit |
|---|---|---|---|---|---|
| nebius (Nebius Token Factory) | models-dev | $0.20 | $0.80 | 262,144 | 8,192 |
| venice (Venice AI) | models-dev | $0.45 | $3.50 | 131,072 | 32,768 |
| siliconflowcn (SiliconFlow, China) | models-dev | $0.13 | $0.60 | 262,000 | 262,000 |
| chutes (Chutes) | models-dev | $0.11 | $0.60 | 262,144 | 262,144 |
| siliconflow (SiliconFlow) | models-dev | $0.13 | $0.60 | 262,000 | 262,000 |
| huggingface (Hugging Face) | models-dev | $0.30 | $3.00 | 262,144 | 131,072 |
| wandb (Weights & Biases) | models-dev | $0.10 | $0.10 | 262,144 | 131,072 |
| iflowcn (iFlow) | models-dev | $0.00 | $0.00 | 256,000 | 64,000 |
| synthetic (Synthetic) | models-dev | $0.65 | $3.00 | 256,000 | 32,000 |
| submodel (submodel) | models-dev | $0.20 | $0.60 | 262,144 | 131,072 |
| nanogpt (NanoGPT) | models-dev | $1.00 | $2.00 | 262,144 | 8,192 |
| friendli (Friendli) | models-dev | - | - | 131,072 | 131,072 |
| aihubmix (AIHubMix) | models-dev | $0.28 | $2.80 | 262,144 | 262,144 |
| ionet (IO.NET) | models-dev | $0.11 | $0.60 | 262,144 | 4,096 |
| modelscope (ModelScope) | models-dev | $0.00 | $0.00 | 262,144 | 131,072 |
| deepinfra (deepinfra) | litellm | $0.30 | $2.90 | 262,144 | - |
| fireworksai (fireworks_ai) | litellm | $0.22 | $0.88 | 262,144 | - |
| openrouter (OpenRouter) | openrouter | $0.11 | $0.60 | 262,144 | - |

OpenRouter's model description: Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant enhances structured logical reasoning, mathematics, science, and long-form generation, showing strong benchmark performance on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It enforces a special reasoning mode (output closed by `</think>`) and is designed for long outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-source variant in the Qwen3-235B series, surpassing many closed models in structured reasoning use cases.
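Because the model always terminates its chain of thought with a `</think>` tag, callers typically need to separate the reasoning from the final answer. A minimal sketch of such a splitter — note that exact tag handling varies by provider (some echo the opening `<think>`, some strip it), so this is an illustrative assumption, not any provider's documented behavior:

```python
def split_thinking(text: str, tag: str = "</think>") -> tuple[str, str]:
    """Split a thinking-model completion into (reasoning, answer).

    If the closing tag is absent (e.g. generation was truncated),
    treat the whole text as reasoning with an empty answer.
    """
    head, sep, tail = text.partition(tag)
    if not sep:
        return head.strip(), ""
    reasoning = head.strip()
    # Drop a leading <think> if the provider echoed it back.
    if reasoning.startswith("<think>"):
        reasoning = reasoning[len("<think>"):].strip()
    return reasoning, tail.strip()

reasoning, answer = split_thinking(
    "<think>2+2 is basic arithmetic.</think>The answer is 4."
)
print(answer)  # → The answer is 4.
```

Handling the missing-tag case matters in practice: with small output limits (e.g. 4,096 or 8,192 tokens above), a long reasoning trace can be cut off before `</think>` ever appears.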
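Since prices are quoted per million tokens, comparing providers for a given workload is simple arithmetic. A quick sketch with a few rates hardcoded from the table above (these are snapshot values, not fetched from any live pricing API):

```python
# Estimate per-request cost for qwen3-235b-a22b-thinking-2507
# using the per-1M-token rates listed in the provider table.
RATES = {  # provider: (input $/1M tokens, output $/1M tokens)
    "nebius": (0.20, 0.80),
    "chutes": (0.11, 0.60),
    "wandb": (0.10, 0.10),
    "deepinfra": (0.30, 2.90),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    in_rate, out_rate = RATES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Thinking models generate long reasoning traces, so the output rate
# usually dominates: compare a 10k-in / 40k-out request across providers.
for provider in RATES:
    print(f"{provider}: ${estimate_cost(provider, 10_000, 40_000):.4f}")
```

For reasoning-heavy traffic the output rate is the number to watch: at 10k input / 40k output tokens, the spread between the cheapest and priciest rows in the table is more than an order of magnitude.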