| Model | Model ID | Input Price | Output Price | Provider | Context | Output Limit |
|---|---|---|---|---|---|---|
| Qwen 3 235B Instruct | qwen3-235b-a22b-instruct-2507 | 0.20 | 0.60 | Synthetic | 256000 | 32000 |
| Qwen2.5-Coder-32B-Instruct | qwen2.5-coder-32b-instruct | 0.80 | 0.80 | Synthetic | 32768 | 32768 |
| Qwen 3 Coder 480B | qwen3-coder-480b-a35b-instruct | 2.00 | 2.00 | Synthetic | 256000 | 32000 |
| Qwen3 235B A22B Thinking 2507 | qwen3-235b-a22b-thinking-2507 | 0.65 | 3.00 | Synthetic | 256000 | 32000 |
| MiniMax-M2 | minimax-m2 | 0.55 | 2.19 | Synthetic | 196608 | 131000 |
| MiniMax-M2.1 | minimax-m2.1 | 0.55 | 2.19 | Synthetic | 204800 | 131072 |
| Llama-3.1-70B-Instruct | llama-3.1-70b-instruct | 0.90 | 0.90 | Synthetic | 128000 | 32768 |
| Llama-3.1-8B-Instruct | llama-3.1-8b-instruct | 0.20 | 0.20 | Synthetic | 128000 | 32768 |
| Llama-3.3-70B-Instruct | llama-3.3-70b-instruct | 0.90 | 0.90 | Synthetic | 128000 | 32768 |
| Llama-4-Scout-17B-16E-Instruct | llama-4-scout-17b-16e-instruct | 0.15 | 0.60 | Synthetic | 328000 | 4096 |
| Llama-4-Maverick-17B-128E-Instruct-FP8 | llama-4-maverick-17b-128e-instruct-fp8 | 0.22 | 0.88 | Synthetic | 524000 | 4096 |
| Llama-3.1-405B-Instruct | llama-3.1-405b-instruct | 3.00 | 3.00 | Synthetic | 128000 | 32768 |
| Kimi K2 0905 | kimi-k2-instruct-0905 | 1.20 | 1.20 | Synthetic | 262144 | 32768 |
| Kimi K2 Thinking | kimi-k2-thinking | 0.55 | 2.19 | Synthetic | 262144 | 262144 |
| GLM 4.5 | glm-4.5 | 0.55 | 2.19 | Synthetic | 128000 | 96000 |
| GLM 4.7 | glm-4.7 | 0.55 | 2.19 | Synthetic | 200000 | 64000 |
| GLM 4.6 | glm-4.6 | 0.55 | 2.19 | Synthetic | 200000 | 64000 |
| DeepSeek R1 | deepseek-r1 | 0.55 | 2.19 | Synthetic | 128000 | 128000 |
| DeepSeek R1 (0528) | deepseek-r1-0528 | 3.00 | 8.00 | Synthetic | 128000 | 128000 |
| DeepSeek V3.1 Terminus | deepseek-v3.1-terminus | 1.20 | 1.20 | Synthetic | 128000 | 128000 |
| DeepSeek V3.2 | deepseek-v3.2 | 0.27 | 0.40 | Synthetic | 162816 | 8000 |
| DeepSeek V3 | deepseek-v3 | 1.25 | 1.25 | Synthetic | 128000 | 128000 |
| DeepSeek V3.1 | deepseek-v3.1 | 0.56 | 1.68 | Synthetic | 128000 | 128000 |
| DeepSeek V3 (0324) | deepseek-v3-0324 | 1.20 | 1.20 | Synthetic | 128000 | 128000 |
| GPT OSS 120B | gpt-oss-120b | 0.10 | 0.10 | Synthetic | 128000 | 32768 |