INTELLECT-3
Introducing INTELLECT-3: scaling RL to a 100B+ MoE model on our end-to-end stack, achieving state-of-the-art performance for its size across math, code, and reasoning.
| Provider | Source | Input ($/1M) | Output ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.20 | $1.10 | | |
| cortecs | models-dev | $0.22 | $1.20 | Context: 128000, output limit: 128000 | |
| openrouter | openrouter | $0.20 | $1.10 | INTELLECT-3 is a 106B-parameter Mixture-of-Experts model (12B active) post-trained from GLM-4.5-Air-Base with supervised fine-tuning (SFT) followed by large-scale reinforcement learning (RL). It offers state-of-the-art performance for its size across math, code, science, and general reasoning, consistently outperforming many larger frontier models. Designed for strong multi-step problem solving, it maintains high accuracy on structured tasks while remaining efficient at inference thanks to its MoE architecture. Context: 131072 | |
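To make the per-1M-token prices above concrete, here is a minimal sketch of how a request's cost works out; `estimate_cost` is a hypothetical helper, not part of any provider's SDK.

```python
# Hypothetical helper: estimate the USD cost of one request from the
# per-1M-token prices listed in the table above.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost for a single request."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Example: 50,000 input tokens and 10,000 output tokens at the
# vercel/openrouter rates ($0.20 in, $1.10 out):
# 50_000 * 0.20 / 1e6 + 10_000 * 1.10 / 1e6 = 0.021
cost = estimate_cost(50_000, 10_000, 0.20, 1.10)
print(f"${cost:.3f}")  # → $0.021
```

At these rates, output tokens dominate cost for generation-heavy workloads, which is why input and output are priced separately.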