Qwen3 Coder 480B A35B Instruct

qwen3-coder

Mixture-of-experts LLM with advanced coding and reasoning capabilities

Available at 8 Providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|----------|--------|--------------------|---------------------|-------------|------|
| vercel | vercel | $0.38 | $1.53 | Mixture-of-experts LLM with advanced coding and reasoning capabilities | |
| poe | poe | $9,000.00 | - | Qwen3 Coder 480B A35B Instruct is a state-of-the-art 480B-parameter Mixture-of-Experts model (35B active) that achieves top-tier performance across multiple agentic coding benchmarks. Supports a 256K native context length and scales to 1M tokens with extrapolation. No data provided is used for training, and data is sent only to Fireworks AI, a US-based company. | |
| helicone | models-dev | $0.22 | $0.95 | Provider: Helicone, Context: 262144, Output Limit: 16384 | |
| opencode | models-dev | $0.45 | $1.80 | Provider: OpenCode Zen, Context: 262144, Output Limit: 65536 | |
| fastrouter | models-dev | $0.30 | $1.20 | Provider: FastRouter, Context: 262144, Output Limit: 65536 | |
| iflowcn | models-dev | $0.00 | $0.00 | Provider: iFlow, Context: 256000, Output Limit: 64000 | |
| nanogpt | models-dev | $1.00 | $2.00 | Provider: NanoGPT, Context: 106000, Output Limit: 8192 | |
| openrouter | openrouter | $0.22 | $0.95 | Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team, optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model features 480 billion total parameters, with 35 billion active per forward pass (8 of 160 experts). Pricing for the Alibaba endpoints varies by context length: once a request exceeds 128K input tokens, the higher rate applies. Context: 262144 | |
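
Since these providers bill per million tokens and the OpenRouter row notes tiered pricing on the Alibaba endpoints above 128K input tokens, a small cost calculator makes the arithmetic concrete. This is a minimal sketch: the sub-128K rates come from the OpenRouter row above, while the over-128K rates are hypothetical placeholders (the listing does not state them), and the function name is ours.

```python
# Cost sketch for per-token pricing with a context-length tier.
# Rates at or below the 128K threshold use the listed OpenRouter
# prices ($0.22 in / $0.95 out per 1M tokens); the >128K rates are
# HYPOTHETICAL placeholders -- the listing does not give them.

TIER_THRESHOLD = 128_000  # input tokens; from the OpenRouter note above

def request_cost(input_tokens: int, output_tokens: int,
                 base_in: float = 0.22, base_out: float = 0.95,
                 high_in: float = 0.44, high_out: float = 1.90) -> float:
    """Return the USD cost of one request under tiered per-1M-token pricing."""
    # Once a request exceeds 128K input tokens, the higher rate
    # applies (per the OpenRouter description above).
    if input_tokens > TIER_THRESHOLD:
        rate_in, rate_out = high_in, high_out
    else:
        rate_in, rate_out = base_in, base_out
    return input_tokens / 1e6 * rate_in + output_tokens / 1e6 * rate_out

# Example: a 200K-token repository prompt with a 4K-token reply
print(f"${request_cost(200_000, 4_000):.4f}")  # falls in the higher tier
```

The same function covers the flat-rate providers in the table: pass identical base and high rates and the threshold becomes irrelevant.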