
o3-mini

o3-mini is OpenAI's most recent small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini.

Available at 11 Providers

| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit |
|---|---|---|---|---|---|
| vercel | vercel | $1.10 | $4.40 | — | — |
| poe | poe | $0.99 | $4.00 | 200,000 | 100,000 |
| githubcopilot (GitHub Copilot) | models-dev | $0.00 | $0.00 | 128,000 | 65,536 |
| abacus (Abacus) | models-dev | $1.10 | $4.40 | 200,000 | 100,000 |
| githubmodels (GitHub Models) | models-dev | $0.00 | $0.00 | 200,000 | 100,000 |
| azure (Azure) | models-dev | $1.10 | $4.40 | 200,000 | 100,000 |
| helicone (Helicone) | models-dev | $1.10 | $4.40 | 200,000 | 100,000 |
| cloudflareaigateway (Cloudflare AI Gateway) | models-dev | $1.10 | $4.40 | 200,000 | 100,000 |
| openai (OpenAI) | models-dev | $1.10 | $4.40 | 200,000 | 100,000 |
| azurecognitiveservices (Azure Cognitive Services) | models-dev | $1.10 | $4.40 | 200,000 | 100,000 |
| openrouter | openrouter | $1.10 | $4.40 | 200,000 | — |

Provider notes:

poe: o3-mini is OpenAI's reasoning model, providing high intelligence on a variety of tasks and domains, including science, math, and coding. This bot uses medium reasoning effort by default, but "low", "medium", or "high" can be selected; it supports 200k tokens of input context and 100k tokens of output context. To instruct the bot to use more reasoning effort, add --reasoning_effort to the end of your message with one of "low", "medium", or "high".

openrouter: OpenAI o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and coding. It supports the `reasoning_effort` parameter, which can be set to "high", "medium", or "low" to control the model's thinking time; the default is "medium". OpenRouter also offers the model slug `openai/o3-mini-high`, which defaults the parameter to "high". The model supports key developer capabilities including function calling, structured outputs, and streaming, though it does not include vision processing. It demonstrates significant improvements over its predecessor: expert testers preferred its responses 56% of the time and noted a 39% reduction in major errors on complex questions. At medium reasoning effort, o3-mini matches the performance of the larger o1 model on challenging reasoning evaluations such as AIME and GPQA, while maintaining lower latency and cost.
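Both the Poe and OpenRouter listings describe an adjustable `reasoning_effort`. On the API side this is a request-body parameter rather than a message suffix; the sketch below shows one way to construct such a body. The helper name and its validation are illustrative, not part of any provider's SDK.

```python
import json

VALID_EFFORTS = ("low", "medium", "high")

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    """Build a Chat Completions-style request body for o3-mini.

    "medium" is the default effort level noted by both Poe and OpenRouter.
    """
    if effort not in VALID_EFFORTS:
        raise ValueError(f"invalid reasoning_effort: {effort!r}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_o3_mini_request("Factor x^2 - 5x + 6.", effort="high")
print(json.dumps(body, indent=2))
```

On Poe the same knob is exposed as a message suffix instead, e.g. appending `--reasoning_effort high` to the end of your message.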
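The prices above are quoted in dollars per million tokens, so the cost of a single request is (input_tokens × input_rate + output_tokens × output_rate) / 1,000,000. A small sketch using three rates from the table:

```python
# Dollars per 1M tokens (input, output), taken from the provider table above.
PRICES = {
    "openai": (1.10, 4.40),
    "poe": (0.99, 4.00),
    "githubmodels": (0.00, 0.00),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed per-1M-token rates."""
    in_rate, out_rate = PRICES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A request with 10k input tokens and 2k output tokens at the OpenAI rate:
print(f"${estimate_cost('openai', 10_000, 2_000):.4f}")  # → $0.0198
```

Note that output tokens dominate the bill at these rates: each output token costs four times as much as an input token.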