llama-3.2-90b-vision-instruct
Provider: GitHub Models, Context: 128000, Output Limit: 8192
| Provider | Source | Input ($/1M tokens) | Output ($/1M tokens) | Description | Free |
|---|---|---|---|---|---|
| githubmodels | models-dev | $0.00 | $0.00 | Provider: GitHub Models, Context: 128000, Output Limit: 8192 | ✓ |
| azure | models-dev | $2.04 | $2.04 | Provider: Azure, Context: 128000, Output Limit: 8192 | |
| ionet | models-dev | $0.35 | $0.40 | Provider: IO.NET, Context: 16000, Output Limit: 4096 | |
| azurecognitiveservices | models-dev | $2.04 | $2.04 | Provider: Azure Cognitive Services, Context: 128000, Output Limit: 8192 | |
| azureai | litellm | $2.04 | $2.04 | Source: azure_ai, Context: 128000 | |
| openrouter | openrouter | $0.35 | $0.40 | Llama 3.2 90B Vision is a 90-billion-parameter multimodal model designed for demanding visual reasoning and language tasks. It delivers high accuracy in image captioning, visual question answering, and image-text comprehension. Pre-trained on large multimodal datasets and fine-tuned with human feedback, it targets complex, real-time visual and textual analysis. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md); usage is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/). Context: 32768 | |
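
The per-1M-token prices above translate directly into a per-request cost. A minimal sketch in Python, using the prices from the table; the provider keys and example token counts are illustrative only:

```python
# Estimate the USD cost of one request from the per-1M-token prices listed above.
# Prices are taken from the table; token counts below are hypothetical.

PRICES_PER_1M = {
    # provider: (input $/1M tokens, output $/1M tokens)
    "githubmodels": (0.00, 0.00),
    "openrouter": (0.35, 0.40),
    "ionet": (0.35, 0.40),
    "azure": (2.04, 2.04),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES_PER_1M[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 2,000-token prompt with an 800-token completion via OpenRouter.
print(f"${request_cost('openrouter', 2_000, 800):.6f}")  # $0.001020
```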
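
The OpenRouter description highlights image captioning and visual question answering. Below is a minimal sketch of such a call through OpenRouter's OpenAI-compatible chat completions endpoint; the model slug `meta-llama/llama-3.2-90b-vision-instruct` and the image URL are assumptions, so check OpenRouter's model listing for the exact identifier:

```python
# A minimal sketch of a visual question-answering request via OpenRouter.
# The model slug and image URL are assumptions, not confirmed by this page.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meta-llama/llama-3.2-90b-vision-instruct",  # assumed slug
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder image
                ],
            }
        ],
        "max_tokens": 256,  # well under the 8192 output limit listed above
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```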