
Llama 4 Maverick 17B 128E Instruct FP8

llama-4-maverick-17b-128e-instruct-fp8

Provider: GitHub Models · Context: 128,000 tokens · Output limit: 8,192 tokens
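For reference, a minimal sketch of calling this model through an OpenAI-compatible chat-completions endpoint. The base URL, API-key variable, and exact model identifier are assumptions here; each provider in the table below uses its own endpoint and model naming scheme, so check the provider's documentation.

```python
# Minimal sketch: querying the model via an OpenAI-compatible endpoint.
# The base_url and model identifier are placeholders/assumptions; adjust
# them to whichever provider from the table below you actually use.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://<provider-endpoint>/v1",       # assumption: provider-specific
    api_key=os.environ["PROVIDER_API_KEY"],          # assumption: your provider key
)

response = client.chat.completions.create(
    model="llama-4-maverick-17b-128e-instruct-fp8",  # slug as listed on this page
    messages=[
        {"role": "user", "content": "Summarize FP8 quantization in one sentence."}
    ],
    max_tokens=256,  # stays well under the 8,192-token output limit
)
print(response.choices[0].message.content)
```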

Available at 10 Providers

| Provider | Catalog source | Input ($/1M) | Output ($/1M) | Context | Output limit | Free |
|---|---|---|---|---|---|---|
| githubmodels (GitHub Models) | models-dev | $0.00 | $0.00 | 128,000 | 8,192 | Yes |
| azure (Azure) | models-dev | $0.25 | $1.00 | 128,000 | 8,192 | — |
| synthetic (Synthetic) | models-dev | $0.22 | $0.88 | 524,000 | 4,096 | — |
| ionet (IO.NET) | models-dev | $0.15 | $0.60 | 430,000 | 4,096 | — |
| azurecognitiveservices (Azure Cognitive Services) | models-dev | $0.25 | $1.00 | 128,000 | 8,192 | — |
| llama (Llama) | models-dev | $0.00 | $0.00 | 128,000 | 4,096 | Yes |
| azureai (azure_ai) | litellm | $1.41 | $0.35 | 1,000,000 | — | — |
| deepinfra (deepinfra) | litellm | $0.15 | $0.60 | 1,048,576 | — | — |
| lambdaai (lambda_ai) | litellm | $0.05 | $0.10 | 131,072 | — | — |
| metallama (meta_llama) | litellm | $0.00 | $0.00 | 1,000,000 | — | Yes |
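To make the per-million pricing concrete, here is a small sketch that estimates the cost of one request from the table above. The prices are taken from the deepinfra row ($0.15 input, $0.60 output per 1M tokens); the token counts are purely illustrative.

```python
# Estimate the dollar cost of a single request from per-1M-token prices.
# Prices come from the deepinfra row of the table; token counts are examples.
INPUT_PRICE_PER_M = 0.15    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60   # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars: (tokens / 1_000_000) * price per million tokens."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 12,000-token prompt with a 1,500-token completion.
print(f"${request_cost(12_000, 1_500):.6f}")  # -> $0.002700
```

The same formula applies to any row: swap in that provider's input and output prices to compare effective per-request cost across providers.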