phi-3-mini-128k-instruct
Provider: GitHub Models, Context: 128000, Output Limit: 4096
| Provider | Source | Input Price ($/1M tokens) | Output Price ($/1M tokens) | Description | Free |
|---|---|---|---|---|---|
| githubmodels | models-dev | $0.00 | $0.00 | Provider: GitHub Models, Context: 128000, Output Limit: 4096 | Yes |
| azure | models-dev | $0.13 | $0.52 | Provider: Azure, Context: 128000, Output Limit: 4096 | |
| azurecognitiveservices | models-dev | $0.13 | $0.52 | Provider: Azure Cognitive Services, Context: 128000, Output Limit: 4096 | |
| azureai | litellm | $0.13 | $0.52 | Source: azure_ai, Context: 128000 | |
| fireworksai | litellm | $0.10 | $0.10 | Source: fireworks_ai, Context: 131072 | |
| openrouter | openrouter | $0.10 | $0.10 | Phi-3 Mini is a powerful 3.8B-parameter model designed for advanced language understanding, reasoning, and instruction following. Optimized through supervised fine-tuning and preference adjustments, it excels at tasks involving common sense, mathematics, logical reasoning, and code processing. At the time of release, Phi-3 Mini demonstrated state-of-the-art performance among lightweight models. The model is static, trained on an offline dataset with an October 2023 cutoff date. Context: 128000 | |
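
The prices above are quoted per million tokens, so a request's cost is input_tokens / 1M × input rate plus output_tokens / 1M × output rate. A minimal sketch of that arithmetic, using the Azure rates from the table ($0.13 input, $0.52 output); the function name `estimate_cost` and the example token counts are illustrative, not part of any provider's SDK:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.13,
                  output_price_per_m: float = 0.52) -> float:
    """Estimate the USD charge for one request at per-million-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: a 10,000-token prompt with a 2,000-token completion at the Azure rates
print(f"${estimate_cost(10_000, 2_000):.6f}")  # $0.002340
```

The same function covers the other providers by passing their listed rates (e.g. 0.10/0.10 for fireworksai or openrouter, 0.00/0.00 for githubmodels).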