| Model | Model ID | Input Price | Output Price | Provider | Context | Output Limit |
|---|---|---|---|---|---|---|
| Kimi-K2-Instruct | kimi-k2-instruct | 1.00 | 3.00 | Hugging Face | 131072 | 16384 |
| Kimi-K2-Instruct-0905 | kimi-k2-instruct-0905 | 1.00 | 3.00 | Hugging Face | 262144 | 16384 |
| MiniMax-M2 | minimax-m2 | 0.30 | 1.20 | Hugging Face | 204800 | 204800 |
| Qwen 3 Embedding 8B | qwen3-embedding-8b | 0.01 | 0.00 | Hugging Face | 32000 | 4096 |
| Qwen 3 Embedding 4B | qwen3-embedding-4b | 0.01 | 0.00 | Hugging Face | 32000 | 2048 |
| Qwen3-Coder-480B-A35B-Instruct | qwen3-coder-480b-a35b-instruct | 2.00 | 2.00 | Hugging Face | 262144 | 66536 |
| Qwen3-235B-A22B-Thinking-2507 | qwen3-235b-a22b-thinking-2507 | 0.30 | 3.00 | Hugging Face | 262144 | 131072 |
| Qwen3-Next-80B-A3B-Instruct | qwen3-next-80b-a3b-instruct | 0.25 | 1.00 | Hugging Face | 262144 | 66536 |
| Qwen3-Next-80B-A3B-Thinking | qwen3-next-80b-a3b-thinking | 0.30 | 2.00 | Hugging Face | 262144 | 131072 |
| GLM-4.5 | glm-4.5 | 0.60 | 2.20 | Hugging Face | 131072 | 98304 |
| GLM-4.6 | glm-4.6 | 0.60 | 2.20 | Hugging Face | 200000 | 128000 |
| GLM-4.5-Air | glm-4.5-air | 0.20 | 1.10 | Hugging Face | 128000 | 96000 |
| DeepSeek-V3-0324 | deepseek-v3-0324 | 1.25 | 1.25 | Hugging Face | 16384 | 8192 |
| DeepSeek-R1-0528 | deepseek-r1-0528 | 3.00 | 5.00 | Hugging Face | 163840 | 163840 |
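For a quick sense of how the pricing columns combine, here is a minimal sketch of a per-request cost estimate. It assumes the listed prices are USD per million tokens (the table itself does not state units), and the `PRICES` mapping and `estimate_cost` helper are illustrative names, not part of any documented API.

```python
# Minimal cost-estimation sketch, assuming the table's prices are USD per million tokens.

# model_id -> (input_price, output_price); values copied from a few rows of the table above
PRICES = {
    "kimi-k2-instruct": (1.00, 3.00),
    "minimax-m2": (0.30, 1.20),
    "glm-4.6": (0.60, 2.20),
    "deepseek-r1-0528": (3.00, 5.00),
}

def estimate_cost(model_id: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request in USD under the per-million-token assumption."""
    input_price, output_price = PRICES[model_id]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: 10k prompt tokens and 2k completion tokens on glm-4.6
print(f"${estimate_cost('glm-4.6', 10_000, 2_000):.4f}")  # -> $0.0104
```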