qwen3-8b
Provider: Alibaba, Context: 131072, Output Limit: 8192
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| alibaba | models-dev | $0.18 | $0.70 | Provider: Alibaba, Context: 131072, Output Limit: 8192 | |
| alibabacn | models-dev | $0.07 | $0.29 | Provider: Alibaba (China), Context: 131072, Output Limit: 8192 | |
| siliconflowcn | models-dev | $0.06 | $0.06 | Provider: SiliconFlow (China), Context: 131000, Output Limit: 131000 | |
| siliconflow | models-dev | $0.06 | $0.06 | Provider: SiliconFlow, Context: 131000, Output Limit: 131000 | |
| fireworksai | litellm | $0.20 | $0.20 | Source: fireworks_ai, Context: 40960 | |
| openrouter | openrouter | $0.04 | $0.14 | Qwen3-8B is a dense 8.2B-parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports switching between a "thinking" mode for math, coding, and logical inference and a "non-thinking" mode for general conversation. The model is fine-tuned for instruction following, agent integration, creative writing, and multilingual use across 100+ languages and dialects. It natively supports a 32K-token context window, extendable to 131K tokens with YaRN scaling. Context: 128000 | |
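
The prices above are quoted in USD per 1M tokens, billed separately for input and output. As a rough illustration of how those rates translate into per-request cost, the sketch below compares a single request across a few of the listed providers; the token counts and provider keys are made up for the example, and current pricing should always be confirmed with each provider.

```python
# Estimate the USD cost of one request from per-million-token rates.
# Rates copied from the table above; token counts are illustrative.

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request at $/1M-token input and output rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
rates = {
    "alibaba":     (0.18, 0.70),
    "alibabacn":   (0.07, 0.29),
    "siliconflow": (0.06, 0.06),
    "fireworksai": (0.20, 0.20),
    "openrouter":  (0.04, 0.14),
}
for provider, (inp, out) in rates.items():
    print(f"{provider}: ${estimate_cost(10_000, 2_000, inp, out):.6f}")
```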
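The description's "thinking" vs. "non-thinking" switch is exposed through the model's chat template. Below is a minimal sketch of toggling it when running the model locally, assuming the Hugging Face `transformers` library and the public `Qwen/Qwen3-8B` checkpoint; the `enable_thinking` flag follows the Qwen3 model-card usage, and the prompt is just an example.

```python
# Minimal sketch: toggle Qwen3's thinking mode via the chat template.
# Assumes transformers is installed and the Qwen/Qwen3-8B weights are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # public Hub checkpoint assumed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]

# enable_thinking=True renders the reasoning template;
# set it to False for plain conversational replies.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```

Hosted providers typically expose the same switch through their own request parameters rather than the chat template, so check each provider's API reference for the equivalent option.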