| Model | Model ID | Input price | Output price | Provider | Context | Output limit |
|---|---|---|---|---|---|---|
| Mixtral-8x7B-Instruct-v0.1 | mixtral-8x7b-instruct-v0.1 | 0.70 | 0.70 | OVHcloud AI Endpoints | 32000 | 32000 |
| Mistral-7B-Instruct-v0.3 | mistral-7b-instruct-v0.3 | 0.11 | 0.11 | OVHcloud AI Endpoints | 127000 | 127000 |
| Llama-3.1-8B-Instruct | llama-3.1-8b-instruct | 0.11 | 0.11 | OVHcloud AI Endpoints | 131000 | 131000 |
| Qwen2.5-VL-72B-Instruct | qwen2.5-vl-72b-instruct | 1.01 | 1.01 | OVHcloud AI Endpoints | 32000 | 32000 |
| Mistral-Nemo-Instruct-2407 | mistral-nemo-instruct-2407 | 0.14 | 0.14 | OVHcloud AI Endpoints | 118000 | 118000 |
| Mistral-Small-3.2-24B-Instruct-2506 | mistral-small-3.2-24b-instruct-2506 | 0.10 | 0.31 | OVHcloud AI Endpoints | 128000 | 128000 |
| Qwen2.5-Coder-32B-Instruct | qwen2.5-coder-32b-instruct | 0.96 | 0.96 | OVHcloud AI Endpoints | 32000 | 32000 |
| Qwen3-Coder-30B-A3B-Instruct | qwen3-coder-30b-a3b-instruct | 0.07 | 0.26 | OVHcloud AI Endpoints | 256000 | 256000 |
| llava-next-mistral-7b | llava-next-mistral-7b | 0.32 | 0.32 | OVHcloud AI Endpoints | 32000 | 32000 |
| DeepSeek-R1-Distill-Llama-70B | deepseek-r1-distill-llama-70b | 0.74 | 0.74 | OVHcloud AI Endpoints | 131000 | 131000 |
| Meta-Llama-3_1-70B-Instruct | meta-llama-3_1-70b-instruct | 0.74 | 0.74 | OVHcloud AI Endpoints | 131000 | 131000 |
| gpt-oss-20b | gpt-oss-20b | 0.05 | 0.18 | OVHcloud AI Endpoints | 131000 | 131000 |
| gpt-oss-120b | gpt-oss-120b | 0.09 | 0.47 | OVHcloud AI Endpoints | 131000 | 131000 |
| Meta-Llama-3_3-70B-Instruct | meta-llama-3_3-70b-instruct | 0.74 | 0.74 | OVHcloud AI Endpoints | 131000 | 131000 |
| Qwen3-32B | qwen3-32b | 0.09 | 0.25 | OVHcloud AI Endpoints | 32000 | 32000 |
| llava-v1.6-mistral-7b-hf | llava-v1.6-mistral-7b-hf | 0.29 | 0.29 | ovhcloud | 32000 | — |
| mamba-codestral-7B-v0.1 | mamba-codestral-7b-v0.1 | 0.19 | 0.19 | ovhcloud | 256000 | — |
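The input and output rates above can be turned into a per-request cost estimate. A minimal sketch, assuming the listed prices are per 1,000,000 tokens (the table does not state the unit, so treat that as an assumption; the `RATES` dict and `estimate_cost` helper are illustrative, not part of any provider SDK):

```python
# Illustrative cost estimator for the rates in the table above.
# Assumption: prices are per 1,000,000 tokens (unit not stated in the table).

# A couple of entries copied from the table; extend as needed.
RATES: dict[str, tuple[float, float]] = {
    "gpt-oss-120b": (0.09, 0.47),
    "mistral-small-3.2-24b-instruct-2506": (0.10, 0.31),
}


def estimate_cost(model_id: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request under the per-million-token assumption."""
    in_rate, out_rate = RATES[model_id]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


# Example: a request with 10k prompt tokens and 2k completion tokens.
cost = estimate_cost("gpt-oss-120b", 10_000, 2_000)
```

Note that for the models where both columns are equal (e.g. Mixtral-8x7B at 0.70/0.70), the split between input and output tokens does not affect the estimate.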