| Model | ID | Input Price | Output Price | Provider | Context | Output Limit |
|---|---|---|---|---|---|---|
| Kimi K2 Thinking | `kimi-k2-thinking` | 1.00 | 2.00 | NanoGPT | 32768 | 8192 |
| Kimi K2 Instruct | `kimi-k2-instruct` | 1.00 | 2.00 | NanoGPT | 131072 | 8192 |
| Hermes 4 405B Thinking | `hermes-4-405b:thinking` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| Llama 3.3 Nemotron Super 49B V1.5 | `llama-3_3-nemotron-super-49b-v1_5` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| DeepSeek V3.2 Thinking | `deepseek-v3.2:thinking` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| DeepSeek R1 | `deepseek-r1` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| MiniMax M2.1 | `minimax-m2.1` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| GPT-OSS 120B | `gpt-oss-120b` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| GLM 4.6 Thinking | `glm-4.6:thinking` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| GLM 4.6 | `glm-4.6` | 1.00 | 2.00 | NanoGPT | 200000 | 8192 |
| Qwen3 Coder | `qwen3-coder` | 1.00 | 2.00 | NanoGPT | 106000 | 8192 |
| Qwen3 235B A22B Thinking 2507 | `qwen3-235b-a22b-thinking-2507` | 1.00 | 2.00 | NanoGPT | 262144 | 8192 |
| Devstral 2 123B Instruct 2512 | `devstral-2-123b-instruct-2512` | 1.00 | 2.00 | NanoGPT | 131072 | 8192 |
| Mistral Large 3 675B Instruct 2512 | `mistral-large-3-675b-instruct-2512` | 1.00 | 2.00 | NanoGPT | 131072 | 8192 |
| Ministral 14B Instruct 2512 | `ministral-14b-instruct-2512` | 1.00 | 2.00 | NanoGPT | 131072 | 8192 |
| Llama 4 Maverick | `llama-4-maverick` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| Llama 3.3 70B Instruct | `llama-3.3-70b-instruct` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| GLM 4.7 | `glm-4.7` | 1.00 | 2.00 | NanoGPT | 204800 | 8192 |
| GLM 4.5 Air | `glm-4.5-air` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| GLM 4.7 Thinking | `glm-4.7:thinking` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |
| GLM 4.5 Air Thinking | `glm-4.5-air:thinking` | 1.00 | 2.00 | NanoGPT | 128000 | 8192 |