qwen3-14b
Provider: Alibaba, Context: 131072, Output Limit: 8192
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| alibaba | models-dev | $0.35 | $1.40 | Provider: Alibaba, Context: 131072, Output Limit: 8192 | |
| alibabacn | models-dev | $0.14 | $0.57 | Provider: Alibaba (China), Context: 131072, Output Limit: 8192 | |
| siliconflowcn | models-dev | $0.07 | $0.28 | Provider: SiliconFlow (China), Context: 131000, Output Limit: 131000 | |
| chutes | models-dev | $0.05 | $0.22 | Provider: Chutes, Context: 40960, Output Limit: 40960 | |
| siliconflow | models-dev | $0.07 | $0.28 | Provider: SiliconFlow, Context: 131000, Output Limit: 131000 | |
| deepinfra | litellm | $0.06 | $0.24 | Source: deepinfra, Context: 40960 | |
| fireworksai | litellm | $0.20 | $0.20 | Source: fireworks_ai, Context: 40960 | |
| openrouter | openrouter | $0.05 | $0.22 | Qwen3-14B is a dense 14.8B-parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, programming, and logical inference, and a "non-thinking" mode for general-purpose conversation. The model is fine-tuned for instruction following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K-token contexts and can extend to 131K tokens using YaRN-based scaling. Context: 40960 | |
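Prices in the table are quoted per million tokens, so per-request cost is a straightforward proportion. A minimal sketch using the chutes rates ($0.05 input / $0.22 output) with hypothetical token counts:

```python
# Cost estimate from the table's per-1M-token prices.
# Rates are the chutes row ($0.05 input, $0.22 output); the token counts
# in the example call are illustrative placeholders, not measurements.
INPUT_PRICE_PER_M = 0.05   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.22  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A 4,000-token prompt with a 1,000-token completion:
print(f"${request_cost(4_000, 1_000):.6f}")  # $0.000420
```

Swap in the rates from any other row to compare providers for a given traffic profile.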
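The thinking/non-thinking switch mentioned in the openrouter description is controlled through the model's chat template when the weights are run directly. A hedged sketch, assuming the Hugging Face checkpoint Qwen/Qwen3-14B and the `enable_thinking` template argument from the Qwen3 model card (neither is documented by this table; verify against current upstream docs):

```python
# Sketch: toggling Qwen3's thinking mode via the chat template.
# Assumes the Qwen/Qwen3-14B checkpoint and the enable_thinking switch
# described in the Qwen3 model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize YaRN context scaling in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # False = plain dialogue; True emits a reasoning trace before the answer
)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```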