deepseek-v3.2-exp
DeepSeek-V3.2-Exp is an experimental model that introduces the DeepSeek Sparse Attention (DSA) mechanism to improve long-context processing efficiency. Built on V3.1-Terminus, the model uses DSA to apply fine-grained sparse attention while maintaining essentially identical output quality.
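As a rough conceptual illustration of what "fine-grained sparse attention" means, the sketch below has each query attend only to a small, per-query selection of the highest-scoring key positions instead of the full context. The scoring rule, top-k size, and NumPy implementation are illustrative assumptions, not DeepSeek's actual DSA kernel.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=4):
    """Toy top-k sparse attention for a single query vector.

    Only the k highest-scoring key positions are attended to; every other
    position is skipped entirely. This illustrates fine-grained sparsity
    in general, not DeepSeek's DSA implementation.
    """
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)               # similarity of the query to every key
    topk = np.argpartition(scores, -k)[-k:]   # indices of the k best-scoring keys
    weights = np.exp(scores[topk] - scores[topk].max())
    weights /= weights.sum()                  # softmax over the selected keys only
    return weights @ V[topk]                  # weighted sum of the selected values

# Toy usage: 16 key/value positions, 8-dim head, attend to only 4 positions.
rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))
print(topk_sparse_attention(q, K, V, k=4).shape)  # (8,)
```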
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.27 | $0.40 | DeepSeek-V3.2-Exp is an experimental model introducing the DeepSeek Sparse Attention (DSA) mechanism for enhanced long-context processing efficiency. Built on V3.1-Terminus, DSA achieves fine-grained sparse attention while maintaining identical output quality. | |
| poe | poe | $3,900.00 | - | DeepSeek-V3.2-Exp is an experimental model introducing the DeepSeek Sparse Attention (DSA) mechanism for enhanced long-context processing efficiency. Built on V3.1-Terminus, DSA achieves fine-grained sparse attention while maintaining identical output quality, delivering substantial computational efficiency improvements without compromising accuracy. Comprehensive benchmarks confirm V3.2-Exp matches V3.1-Terminus performance, showing the efficiency gains don't sacrifice capability. As both a powerful tool and a research platform, it establishes new paradigms for efficient long-context AI processing. Optional Parameters: use the additional input beside the attachment button to manage optional parameters: 1. Enable/Disable Thinking - makes the model reason about the response before giving a final answer. Technical Specifications: File support: Text, Markdown, and PDF files; Context window: 160k tokens | |
| siliconflowcn | models-dev | $0.27 | $0.41 | Provider: SiliconFlow (China), Context: 164000, Output Limit: 164000 | |
| siliconflow | models-dev | $0.27 | $0.41 | Provider: SiliconFlow, Context: 164000, Output Limit: 164000 | |
| zenmux | models-dev | $0.22 | $0.33 | Provider: ZenMux, Context: 163840, Output Limit: 64000 | |
| openrouter | openrouter | $0.21 | $0.32 | DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to improve training and inference efficiency in long-context scenarios while maintaining output quality. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config) The model was trained under conditions aligned with V3.1-Terminus to enable direct comparison. Benchmarking shows performance roughly on par with V3.1 across reasoning, coding, and agentic tool-use tasks, with minor tradeoffs and gains depending on the domain. This release focuses on validating architectural optimizations for extended context lengths rather than advancing raw task accuracy, making it primarily a research-oriented model for exploring efficient transformer designs. Context: 163840 | |
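For the OpenRouter row, the `reasoning` `enabled` boolean mentioned in the description is passed in the request body. The sketch below is a minimal example; the model slug, payload shape, and placeholder API key are assumptions based on OpenRouter's public chat-completions API and should be checked against the linked docs.

```python
import requests

# Minimal sketch of toggling reasoning for this model on OpenRouter.
# The slug "deepseek/deepseek-v3.2-exp" and <OPENROUTER_API_KEY> are placeholders.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "deepseek/deepseek-v3.2-exp",
        "messages": [
            {"role": "user", "content": "Explain DeepSeek Sparse Attention briefly."}
        ],
        # Set to False to disable the reasoning trace (see the OpenRouter docs above).
        "reasoning": {"enabled": True},
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```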