# llama-4-scout-17b-16e-instruct

Provider: Nvidia, Context: 128000, Output Limit: 4096
| Provider | Name | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit | Free |
|---|---|---|---|---|---|---|---|
| nvidia | Nvidia | models-dev | $0.00 | $0.00 | 128000 | 4096 | |
| groq | Groq | models-dev | $0.11 | $0.34 | 131072 | 8192 | |
| githubmodels | GitHub Models | models-dev | $0.00 | $0.00 | 128000 | 8192 | |
| azure | Azure | models-dev | $0.20 | $0.78 | 128000 | 8192 | |
| cloudflareworkersai | Cloudflare Workers AI | models-dev | $0.27 | $0.85 | 131000 | 131000 | |
| wandb | Weights & Biases | models-dev | $0.17 | $0.66 | 64000 | 8192 | |
| cloudflareaigateway | Cloudflare AI Gateway | models-dev | $0.27 | $0.85 | 128000 | 16384 | |
| synthetic | Synthetic | models-dev | $0.15 | $0.60 | 328000 | 4096 | |
| friendli | Friendli | models-dev | - | - | 131072 | 8000 | |
| azurecognitiveservices | Azure Cognitive Services | models-dev | $0.20 | $0.78 | 128000 | 8192 | |
| azureai | azure_ai | litellm | $0.20 | $0.78 | 10000000 | - | |
| deepinfra | deepinfra | litellm | $0.08 | $0.30 | 327680 | - | |
| lambdaai | lambda_ai | litellm | $0.05 | $0.10 | 16384 | - | |
| nscale | nscale | litellm | $0.09 | $0.29 | N/A | - | |
| sambanova | sambanova | litellm | $0.40 | $0.70 | 8192 | - | |
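Prices above are quoted in dollars per million tokens, so the cost of a single request is a simple pro-rata calculation over input and output token counts. A minimal sketch (the helper function is illustrative, not part of any provider SDK; the rates come from the groq row):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one request at $/1M-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# groq: $0.11 input, $0.34 output per 1M tokens
cost = request_cost(100_000, 8_000, 0.11, 0.34)
print(f"${cost:.6f}")  # → $0.013720
```

The same helper makes provider comparisons easy: plug in each row's rates and a representative input/output split for your workload.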