
Phi-4-multimodal-instruct

Model ID: `phi-4-multimodal-instruct`

Provider: GitHub Models · Context: 128,000 tokens · Output limit: 4,096 tokens

Available from 3 providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Context | Output Limit | Free |
| --- | --- | --- | --- | --- | --- | --- |
| githubmodels | models-dev | $0.00 | $0.00 | 128,000 | 4,096 | Yes |
| azureai | litellm (azure_ai) | $0.08 | $0.32 | 131,072 | | |
| openrouter | openrouter | $0.05 | $0.10 | 131,072 | | |

OpenRouter's listing adds this description: Phi-4 Multimodal Instruct is a versatile 5.6B-parameter foundation model that combines advanced reasoning and instruction-following across both text and visual inputs, producing accurate text outputs. Its unified architecture enables efficient, low-latency inference, making it suitable for edge and mobile deployments. The model accepts text input in multiple languages, including Arabic, Chinese, English, French, German, Japanese, and Spanish, while visual input is optimized primarily for English. It performs well on multimodal tasks involving mathematical, scientific, and document reasoning, giving developers and enterprises a compact yet capable model for sophisticated interactive applications. For more information, see the [Phi-4 Multimodal blog post](https://azure.microsoft.com/en-us/blog/empowering-innovation-the-next-generation-of-the-phi-family/).
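Per-million-token pricing translates to per-request cost by simple arithmetic: multiply the input and output token counts by the respective rates and divide by one million. A minimal sketch, using only the prices listed in the table above (provider names and token counts are illustrative):

```python
# Estimate per-request cost for phi-4-multimodal-instruct at each provider,
# using the $/1M-token prices from the table above.

PRICES = {  # provider: (input $/1M tokens, output $/1M tokens)
    "githubmodels": (0.00, 0.00),
    "azureai": (0.08, 0.32),
    "openrouter": (0.05, 0.10),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a request with 10,000 input tokens and 1,000 output tokens.
for provider in PRICES:
    print(f"{provider}: ${request_cost(provider, 10_000, 1_000):.6f}")
```

For instance, the same 10k-in/1k-out request costs $0.00112 on azureai but $0.0006 on openrouter, since azureai's higher output rate ($0.32 vs. $0.10) dominates for generation-heavy workloads.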