| Model | Model ID | Input Price | Output Price | Provider | Context | Output Limit |
|---|---|---|---|---|---|---|
| GPT OSS 120B | gpt-oss-120b | 0.10 | 0.50 | submodel | 131072 | 32768 |
| Qwen3 235B A22B Instruct 2507 | qwen3-235b-a22b-instruct-2507 | 0.20 | 0.30 | submodel | 262144 | 131072 |
| Qwen3 Coder 480B A35B Instruct | qwen3-coder-480b-a35b-instruct-fp8 | 0.20 | 0.80 | submodel | 262144 | 262144 |
| Qwen3 235B A22B Thinking 2507 | qwen3-235b-a22b-thinking-2507 | 0.20 | 0.60 | submodel | 262144 | 131072 |
| GLM 4.5 FP8 | glm-4.5-fp8 | 0.20 | 0.80 | submodel | 131072 | 131072 |
| GLM 4.5 Air | glm-4.5-air | 0.10 | 0.50 | submodel | 131072 | 131072 |
| DeepSeek R1 0528 | deepseek-r1-0528 | 0.50 | 2.15 | submodel | 75000 | 163840 |
| DeepSeek V3.1 | deepseek-v3.1 | 0.20 | 0.80 | submodel | 75000 | 163840 |
| DeepSeek V3 0324 | deepseek-v3-0324 | 0.20 | 0.80 | submodel | 75000 | 163840 |
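The model IDs in the table are what a request would reference. As a minimal sketch, assuming the submodel provider exposes an OpenAI-compatible chat completions endpoint (the base URL, endpoint path, and API key variable below are illustrative assumptions, not documented values), a call to gpt-oss-120b might look like this:

```python
# Minimal sketch of calling one of the listed models, assuming an
# OpenAI-compatible chat completions API. The base URL, endpoint path,
# and environment variable name are assumptions for illustration only.
import os
import requests

BASE_URL = "https://api.example-submodel-provider.com/v1"  # hypothetical endpoint
API_KEY = os.environ.get("SUBMODEL_API_KEY", "")           # hypothetical key variable

payload = {
    "model": "gpt-oss-120b",  # model ID from the table above
    "messages": [
        {"role": "user", "content": "Summarize the trade-offs of FP8 inference."}
    ],
    "max_tokens": 1024,  # must stay within the model's 32768-token output limit
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Swapping in any other model ID from the table works the same way, provided the prompt plus requested output stays within that model's context and output limits.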