
Llama 3.2 1B Instruct

llama-3.2-1b-instruct

Provider: Nvidia, Context: 128000 tokens, Output Limit: 4096 tokens

Available at 5 Providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
| --- | --- | --- | --- | --- | --- |
| nvidia | models-dev | $0.00 | $0.00 | Provider: Nvidia, Context: 128000, Output Limit: 4096 | |
| cloudflareworkersai | models-dev | $0.03 | $0.20 | Provider: Cloudflare Workers AI, Context: 60000, Output Limit: 60000 | |
| cloudflareaigateway | models-dev | $0.03 | $0.20 | Provider: Cloudflare AI Gateway, Context: 128000, Output Limit: 16384 | |
| inference | models-dev | $0.01 | $0.01 | Provider: Inference, Context: 16000, Output Limit: 4096 | |
| openrouter | openrouter | $0.03 | $0.20 | Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks such as summarization, dialogue, and multilingual text analysis. Its smaller size allows it to operate efficiently in low-resource environments while maintaining strong task performance. Supporting eight core languages and fine-tunable for more, Llama 3.2 1B is ideal for businesses or developers seeking lightweight yet powerful AI solutions that can operate in diverse multilingual settings without the high computational demand of larger models. Click here for the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/). Context: 60000 | |
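The prices above are quoted per million tokens, so the cost of a request scales with its input and output token counts. The following is a minimal Python sketch of that arithmetic using the rates from the table; the dictionary and function names are illustrative, not part of any provider SDK, and the token counts in the example are made-up values.

```python
# Sketch: estimate per-request cost from the $/1M-token prices listed above.
# Rates come from the provider table; token counts below are example values only.

PRICES_PER_1M = {  # provider -> (input $/1M tokens, output $/1M tokens)
    "nvidia": (0.00, 0.00),
    "cloudflareworkersai": (0.03, 0.20),
    "cloudflareaigateway": (0.03, 0.20),
    "inference": (0.01, 0.01),
    "openrouter": (0.03, 0.20),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    in_price, out_price = PRICES_PER_1M[provider]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

if __name__ == "__main__":
    # Example: a 2,000-token prompt with a 500-token completion via openrouter:
    # 2000/1e6 * $0.03 + 500/1e6 * $0.20 = $0.00016
    print(f"${estimate_cost('openrouter', 2_000, 500):.5f}")
```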