| Model | Model ID | Input Price | Output Price | Provider | Context | Output Limit |
|---|---|---|---|---|---|---|
| GPT-4.1 nano | `gpt-4.1-nano` | 0.10 | 0.40 | AIHubMix | 1047576 | 32768 |
| GLM-4.7 | `glm-4.7` | 0.27 | 1.10 | AIHubMix | 204800 | 131072 |
| Qwen3 235B A22B Instruct 2507 | `qwen3-235b-a22b-instruct-2507` | 0.28 | 1.12 | AIHubMix | 262144 | 262144 |
| Claude Opus 4.1 | `claude-opus-4-1` | 16.50 | 82.50 | AIHubMix | 200000 | 32000 |
| GPT-5.1 Codex | `gpt-5.1-codex` | 1.25 | 10.00 | AIHubMix | 400000 | 128000 |
| Claude Haiku 4.5 | `claude-haiku-4-5` | 1.10 | 5.50 | AIHubMix | 200000 | 64000 |
| Claude Opus 4.5 | `claude-opus-4-5` | 5.00 | 25.00 | AIHubMix | 200000 | 32000 |
| Gemini 3 Pro Preview | `gemini-3-pro-preview` | 2.00 | 12.00 | AIHubMix | 1000000 | 65000 |
| Gemini 2.5 Flash | `gemini-2.5-flash` | 0.08 | 0.30 | AIHubMix | 1000000 | 65000 |
| GPT-4.1 mini | `gpt-4.1-mini` | 0.40 | 1.60 | AIHubMix | 1047576 | 32768 |
| Claude Sonnet 4.5 | `claude-sonnet-4-5` | 3.30 | 16.50 | AIHubMix | 200000 | 64000 |
| Coding GLM-4.7 Free | `coding-glm-4.7-free` | 0.00 | 0.00 | AIHubMix | 204800 | 131072 |
| GPT-5.1 Codex Mini | `gpt-5.1-codex-mini` | 0.25 | 2.00 | AIHubMix | 400000 | 128000 |
| Qwen3 235B A22B Thinking 2507 | `qwen3-235b-a22b-thinking-2507` | 0.28 | 2.80 | AIHubMix | 262144 | 262144 |
| GPT-5.1 | `gpt-5.1` | 1.25 | 10.00 | AIHubMix | 400000 | 128000 |
| GPT-5-Nano | `gpt-5-nano` | 0.50 | 2.00 | AIHubMix | 128000 | 16384 |
| GPT-5-Codex | `gpt-5-codex` | 1.25 | 10.00 | AIHubMix | 400000 | 128000 |
| GPT-4o | `gpt-4o` | 2.50 | 10.00 | AIHubMix | 128000 | 16384 |
| GPT-4.1 | `gpt-4.1` | 2.00 | 8.00 | AIHubMix | 1047576 | 32768 |
| o4-mini | `o4-mini` | 1.50 | 6.00 | AIHubMix | 200000 | 65536 |
| GPT-5-Mini | `gpt-5-mini` | 1.50 | 6.00 | AIHubMix | 200000 | 64000 |
| Gemini 2.5 Pro | `gemini-2.5-pro` | 1.25 | 5.00 | AIHubMix | 2000000 | 65000 |
| GPT-4o (2024-11-20) | `gpt-4o-2024-11-20` | 2.50 | 10.00 | AIHubMix | 128000 | 16384 |
| GPT-5.1-Codex-Max | `gpt-5.1-codex-max` | 1.25 | 10.00 | AIHubMix | 400000 | 128000 |
| MiniMax M2.1 Free | `minimax-m2.1-free` | 0.00 | 0.00 | AIHubMix | 204800 | 131072 |
| Qwen3 Coder 480B A35B Instruct | `qwen3-coder-480b-a35b-instruct` | 0.82 | 3.29 | AIHubMix | 262144 | 131000 |
| DeepSeek-V3.2-Think | `deepseek-v3.2-think` | 0.30 | 0.45 | AIHubMix | 131000 | 64000 |
| GPT-5 | `gpt-5` | 5.00 | 20.00 | AIHubMix | 400000 | 128000 |
| MiniMax M2.1 | `minimax-m2.1` | 0.29 | 1.15 | AIHubMix | 204800 | 131072 |
| DeepSeek-V3.2 | `deepseek-v3.2` | 0.30 | 0.45 | AIHubMix | 131000 | 64000 |
| Kimi K2 0905 | `kimi-k2-0905` | 0.55 | 2.19 | AIHubMix | 262144 | 262144 |
| GPT-5-Pro | `gpt-5-pro` | 7.00 | 28.00 | AIHubMix | 400000 | 128000 |
| GPT-5.2 | `gpt-5.2` | 1.75 | 14.00 | AIHubMix | 400000 | 128000 |