Mistral Small 24B Instruct 2501

mistral-small-24b-instruct-2501

Provider: Chutes, Context: 32768, Output Limit: 32768
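For context, a minimal call against this model might look like the sketch below. It assumes an OpenAI-compatible chat-completions endpoint such as the one OpenRouter exposes at https://openrouter.ai/api/v1; the base URL, API key, and model slug (`mistralai/mistral-small-24b-instruct-2501`) are placeholder assumptions and may differ for the other providers listed below. Prompt plus completion must fit within the 32,768-token context window, and the completion is further capped by the 32,768-token output limit.

```python
# Minimal sketch of a chat request, assuming an OpenAI-compatible endpoint.
# Base URL, API key, and model slug are placeholders; check your provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumption: OpenRouter's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct-2501",  # slug as listed on OpenRouter; other providers may differ
    messages=[
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."},
    ],
    max_tokens=512,  # must stay within the 32768-token output limit noted above
)
print(response.choices[0].message.content)
```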

Available at 4 providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| chutes | models-dev | $0.03 | $0.11 | Provider: Chutes, Context: 32768, Output Limit: 32768 | |
| deepinfra | litellm | $0.05 | $0.08 | Source: deepinfra, Context: 32768 | |
| fireworksai | litellm | $0.90 | $0.90 | Source: fireworks_ai, Context: 32768 | |
| openrouter | openrouter | $0.03 | $0.11 | Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. [Read the blog post about the model here.](https://mistral.ai/news/mistral-small-3/) Context: 32768 | |
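To make the per-million-token rates in the table concrete, the sketch below estimates the cost of a single request at each provider. The rates mirror the table; the 4,000-input / 1,000-output token counts are hypothetical.

```python
# Rough cost comparison for one request, using the $/1M-token rates listed above.
# Token counts below are hypothetical; actual usage depends on your prompts.

PROVIDERS = {
    "chutes":      {"input": 0.03, "output": 0.11},  # $ per 1M tokens
    "deepinfra":   {"input": 0.05, "output": 0.08},
    "fireworksai": {"input": 0.90, "output": 0.90},
    "openrouter":  {"input": 0.03, "output": 0.11},
}

def request_cost(input_tokens: int, output_tokens: int, rates: dict) -> float:
    """Cost in dollars for one request at the given $/1M-token rates."""
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

if __name__ == "__main__":
    # Example: a 4,000-token prompt producing a 1,000-token completion.
    for name, rates in PROVIDERS.items():
        print(f"{name:12s} ${request_cost(4_000, 1_000, rates):.6f}")
```

At these rates, the example request works out to about $0.00023 at chutes or openrouter, $0.00028 at deepinfra, and $0.0045 at fireworksai.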