# minimax-m2.1
MiniMax 2.1 is MiniMax's latest model, optimized specifically for robustness in coding, tool use, instruction following, and long-horizon planning.
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.30 | $1.20 | MiniMax 2.1 is MiniMax's latest model, optimized specifically for robustness in coding, tool use, instruction following, and long-horizon planning. | |
| poe | poe | - | - | MiniMax M2.1 is a cutting-edge AI model designed to revolutionize how developers build software. With enhanced multi-language programming support, it excels at generating high-quality code in popular languages such as Rust, Java, Golang, C++, Kotlin, Objective-C, TypeScript, and JavaScript. Key improvements: 22% faster response times and 30% lower token consumption for efficient workflows; seamless integration with leading development frameworks (Claude Code, Droid Factory AI, BlackBox, etc.); full-stack development capabilities, from mobile (Android/iOS) to web and 3D interactive prototyping; and an optimized performance-to-cost ratio that makes AI-assisted development more accessible. Whether you're a software engineer, app developer, or tech innovator, M2.1 empowers smarter coding with industry-leading AI. File support: text, Markdown, and PDF files. Context window: 205k tokens. Optional parameters: `--enable_thinking true` enables thinking about the response before giving a final answer (disabled by default); `--temperature` (0 to 2) controls randomness in the response, with lower values more focused and deterministic (default 0.7); `max_output_token` (1 to 131072) sets the number of tokens to generate (default 131072). | |
| minimax | models-dev | $0.30 | $1.20 | Provider: MiniMax, Context: 204800, Output Limit: 131072 | |
| minimaxcn | models-dev | $0.30 | $1.20 | Provider: MiniMax (China), Context: 204800, Output Limit: 131072 | |
| zenmux | models-dev | $0.30 | $1.20 | Provider: ZenMux, Context: 204800, Output Limit: 64000 | |
| synthetic | models-dev | $0.55 | $2.19 | Provider: Synthetic, Context: 204800, Output Limit: 131072 | |
| nanogpt | models-dev | $1.00 | $2.00 | Provider: NanoGPT, Context: 128000, Output Limit: 8192 | |
| aihubmix | models-dev | $0.29 | $1.15 | Provider: AIHubMix, Context: 204800, Output Limit: 131072 | |
| openrouter | openrouter | $0.12 | $0.48 | MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency. Compared to its predecessor, M2.1 delivers cleaner, more concise outputs and faster perceived response times. It shows leading multilingual coding performance across major systems and application languages, achieving 49.4% on Multi-SWE-Bench and 72.5% on SWE-Bench Multilingual, and serves as a versatile agent "brain" for IDEs, coding tools, and general-purpose assistance. To avoid degrading this model's performance, MiniMax highly recommends preserving reasoning between turns. Learn more about using reasoning_details to pass back reasoning in our [docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks). Context: 196608 | |
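The sampling parameters in the listing above (temperature 0 to 2, default 0.7; output cap 131072 tokens) map directly onto an OpenAI-compatible chat completions request, which is the request shape providers such as OpenRouter and MiniMax's own API expose. The sketch below only builds and validates the request payload rather than sending it; the model slug `minimax/minimax-m2.1` and the exact parameter names are assumptions and may differ per provider.

```python
import json


def build_chat_request(prompt: str,
                       model: str = "minimax/minimax-m2.1",  # assumed slug; check your provider
                       temperature: float = 0.7,
                       max_tokens: int = 131072) -> dict:
    """Build an OpenAI-compatible chat completions payload.

    Defaults mirror the listing above: temperature 0.7 and an
    output limit of 131072 tokens.
    """
    # Enforce the documented parameter ranges before sending anything.
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0 and 2")
    if not 1 <= max_tokens <= 131072:
        raise ValueError("max_tokens must be between 1 and 131072")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


payload = build_chat_request("Write a quicksort in Rust.")
print(json.dumps(payload, indent=2))
```

The resulting dictionary can be POSTed as JSON to the provider's chat completions endpoint with your API key; per the openrouter note above, multi-turn agent loops should also carry the returned reasoning details back into subsequent requests.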