Ministral 3B

ministral-3b

A compact, efficient model for on-device tasks like smart assistants and local analytics, offering low-latency performance.

Available at 6 Providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.04 | $0.04 | A compact, efficient model for on-device tasks like smart assistants and local analytics, offering low-latency performance. | |
| githubmodels | models-dev | $0.00 | $0.00 | Provider: GitHub Models, Context: 128000, Output Limit: 8192 | |
| azure | models-dev | $0.04 | $0.04 | Provider: Azure, Context: 128000, Output Limit: 8192 | |
| azurecognitiveservices | models-dev | $0.04 | $0.04 | Provider: Azure Cognitive Services, Context: 128000, Output Limit: 8192 | |
| azureai | litellm | $0.04 | $0.04 | Source: azure_ai, Context: 128000 | |
| openrouter | openrouter | $0.04 | $0.04 | Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference. Context: 131072 | |
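Most of the providers above expose the model behind an OpenAI-compatible chat completions API. Below is a minimal sketch of calling it through OpenRouter and estimating cost at the listed $0.04 per 1M tokens; the model slug `mistralai/ministral-3b` and the `OPENROUTER_API_KEY` environment variable are assumptions here, so verify both against your provider's documentation before use.

```python
# Minimal sketch: Ministral 3B via OpenRouter's OpenAI-compatible endpoint.
# Assumptions: the "mistralai/ministral-3b" slug and the OPENROUTER_API_KEY
# env var; check the provider's model page for the exact identifiers.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="mistralai/ministral-3b",  # assumed slug; may differ per provider
    messages=[
        {"role": "user", "content": "Summarize why small edge models trade capacity for latency."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)

# Rough cost estimate using the table's $0.04/1M rate for both input and output:
usage = response.usage
cost = (usage.prompt_tokens + usage.completion_tokens) * 0.04 / 1_000_000
print(f"Approximate cost for this call: ${cost:.6f}")
```

At these rates a full 128k-token context costs roughly half a cent per request, which is why the model is positioned for high-volume, low-latency workloads.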