grok-4
xAI's latest and greatest flagship model, offering unparalleled performance in natural language, math, and reasoning - the perfect jack of all trades.
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $3.00 | $15.00 | xAI's latest and greatest flagship model, offering unparalleled performance in natural language, math, and reasoning - the perfect jack of all trades. | |
| poe | poe | $3.00 | $15.00 | Grok 4 is xAI's latest and most intelligent language model. It features state-of-the-art capabilities in coding, reasoning, and answering questions. It excels at handling complex and multi-step tasks. Reasoning traces are not available via the xAI API. | |
| xai | models-dev | $3.00 | $15.00 | Provider: xAI, Context: 256000, Output Limit: 64000 | |
| azure | models-dev | $3.00 | $15.00 | Provider: Azure, Context: 256000, Output Limit: 64000 | |
| helicone | models-dev | $3.00 | $15.00 | Provider: Helicone, Context: 256000, Output Limit: 256000 | |
| fastrouter | models-dev | $3.00 | $15.00 | Provider: FastRouter, Context: 256000, Output Limit: 64000 | |
| zenmux | models-dev | $3.00 | $15.00 | Provider: ZenMux, Context: 256000, Output Limit: 64000 | |
| requesty | models-dev | $3.00 | $15.00 | Provider: Requesty, Context: 256000, Output Limit: 64000 | |
| azurecognitiveservices | models-dev | $3.00 | $15.00 | Provider: Azure Cognitive Services, Context: 256000, Output Limit: 64000 | |
| azureai | litellm | $5.50 | $27.50 | Source: azure_ai, Context: 131072 | |
| openrouter | openrouter | $3.00 | $15.00 | Grok 4 is xAI's latest reasoning model with a 256k context window. It supports parallel tool calling, structured outputs, and both image and text inputs. Note that reasoning is not exposed, reasoning cannot be disabled, and the reasoning effort cannot be specified. Pricing increases once the total tokens in a given request exceed 128k. See more details on the [xAI docs](https://docs.x.ai/docs/models/grok-4-0709). Context: 256000 | |
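The base rates above can be turned into a per-request cost estimate. The sketch below assumes the common $3.00/1M-input and $15.00/1M-output rates from the table; it deliberately does not model the higher tier that OpenRouter applies once a request exceeds 128k total tokens, since that tier's rate is not listed here, nor the different azureai (litellm) pricing.

```python
# Estimate the USD cost of a single grok-4 request at the base rates
# listed in the table: $3.00 per 1M input tokens, $15.00 per 1M output
# tokens. The >128k-token pricing tier (OpenRouter) is NOT modeled here.

INPUT_PRICE_PER_M = 3.00    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 15.00  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request at base rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a request with 10k input tokens and 2k output tokens
print(estimate_cost(10_000, 2_000))  # 0.06
```

Because output tokens cost 5x input tokens at these rates, long generations dominate the bill even for prompt-heavy requests.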