grok-code-fast-1
xAI's latest coding model that offers fast agentic coding with a 256K context window.
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.20 | $1.50 | xAI's latest coding model that offers fast agentic coding with a 256K context window. | |
| poe | poe | - | - | Grok-Code-Fast-1 from xAI is a high-performance, cost-efficient model designed for agentic coding. It offers visible reasoning traces, strong steerability, and supports a 256k context window. | |
| xai | models-dev | $0.20 | $1.50 | Provider: xAI, Context: 256000, Output Limit: 10000 | |
| githubcopilot | models-dev | $0.00 | $0.00 | Provider: GitHub Copilot, Context: 128000, Output Limit: 64000 | |
| abacus | models-dev | $0.20 | $1.50 | Provider: Abacus, Context: 256000, Output Limit: 16384 | |
| venice | models-dev | $0.25 | $1.87 | Provider: Venice AI, Context: 262144, Output Limit: 65536 | |
| azure | models-dev | $0.20 | $1.50 | Provider: Azure, Context: 256000, Output Limit: 10000 | |
| helicone | models-dev | $0.20 | $1.50 | Provider: Helicone, Context: 256000, Output Limit: 10000 | |
| zenmux | models-dev | $0.20 | $1.50 | Provider: ZenMux, Context: 256000, Output Limit: 64000 | |
| azurecognitiveservices | models-dev | $0.20 | $1.50 | Provider: Azure Cognitive Services, Context: 256000, Output Limit: 10000 | |
| azureai | litellm | $3.50 | $17.50 | Source: azure_ai, Context: 131072 | |
| openrouter | openrouter | $0.20 | $1.50 | Grok Code Fast 1 is a speedy and economical reasoning model that excels at agentic coding. With reasoning traces visible in the response, developers can steer Grok Code toward high-quality workflows. Context: 256000 | |
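
As a quick sanity check on the per-million-token rates above, the sketch below estimates the USD cost of a single request. It assumes the $0.20 input / $1.50 output pricing that most providers in the table list; the helper name and example token counts are illustrative, not part of any provider's SDK.

```python
# Rough per-request cost estimate for grok-code-fast-1, based on the
# $/1M-token prices in the table above (most providers: $0.20 in / $1.50 out).
# Function and variable names here are illustrative only.

def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.20,
                      output_price_per_m: float = 1.50) -> float:
    """Return the approximate USD cost of one request."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

if __name__ == "__main__":
    # Example: a 20K-token prompt with a 2K-token completion
    cost = estimate_cost_usd(input_tokens=20_000, output_tokens=2_000)
    print(f"~${cost:.4f} per request")  # ~$0.0070
```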