gemini-2.5-pro
Gemini 2.5 Pro is Google's most advanced reasoning Gemini model, capable of solving complex problems. It can comprehend vast datasets and challenging problems from different information sources, including text, audio, images, video, and even entire code repositories.
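For orientation, below is a minimal sketch of calling this model through the google-genai Python SDK. The model id `gemini-2.5-pro` matches the heading above; the client setup and the `GEMINI_API_KEY` environment variable are assumptions, and the exact endpoint and authentication will differ for the providers listed in the table.

```python
# Minimal sketch, assuming the google-genai Python SDK (`pip install google-genai`)
# and an API key exported as GEMINI_API_KEY. Most providers in the table below
# expose the same model behind their own endpoints and auth schemes.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Send a simple text prompt; contents can also mix text, images, audio, or video parts.
response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Summarize the trade-offs between long-context and retrieval-based approaches.",
)
print(response.text)
```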
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | Input: $1.25 | Output: $10.00 | Gemini 2.5 Pro is our most advanced reasoning Gemini model, capable of solving complex problems. Gemini 2.5 Pro can comprehend vast datasets and challenging problems from different information sources, including text, audio, images, video, and even entire code repositories. | |
| poe | poe | Input: $0.87 | Output: $7.00 | Gemini 2.5 Pro is Google's advanced model with frontier performance on various key benchmarks; supports web search and 1 million tokens of input context. To request more thinking effort, append `--thinking_budget` followed by a number from 0 to 32,768 to the end of your message. Use `--web_search true` to enable web search and real-time information access; this is disabled by default. | |
| githubcopilot | models-dev | Input: $0.00 | Output: $0.00 | Provider: GitHub Copilot, Context: 128000, Output Limit: 64000 | |
| abacus | models-dev | Input: $1.25 | Output: $10.00 | Provider: Abacus, Context: 1048576, Output Limit: 65536 | |
| cortecs | models-dev | Input: $1.65 | Output: $11.02 | Provider: Cortecs, Context: 1048576, Output Limit: 65535 | |
| helicone | models-dev | Input: $1.25 | Output: $10.00 | Provider: Helicone, Context: 1048576, Output Limit: 65536 | |
| fastrouter | models-dev | Input: $1.25 | Output: $10.00 | Provider: FastRouter, Context: 1048576, Output Limit: 65536 | |
| google | models-dev | Input: $1.25 | Output: $10.00 | Provider: Google, Context: 1048576, Output Limit: 65536 | |
| googlevertex | models-dev | Input: $1.25 | Output: $10.00 | Provider: Vertex, Context: 1048576, Output Limit: 65536 | |
| zenmux | models-dev | Input: $1.25 | Output: $10.00 | Provider: ZenMux, Context: 1048576, Output Limit: 65536 | |
| requesty | models-dev | Input: $1.25 | Output: $10.00 | Provider: Requesty, Context: 1048576, Output Limit: 65536 | |
| sapaicore | models-dev | Input: $1.25 | Output: $10.00 | Provider: SAP AI Core, Context: 1048576, Output Limit: 65536 | |
| aihubmix | models-dev | Input: $1.25 | Output: $5.00 | Provider: AIHubMix, Context: 2000000, Output Limit: 65000 | |
| deepinfra | litellm | Input: $1.25 | Output: $10.00 | Source: deepinfra, Context: 1000000 | |
| vertex | litellm | Input: $1.25 | Output: $10.00 | Source: vertex, Context: 1048576 | |
| gemini | litellm | Input: $1.25 | Output: $10.00 | Source: gemini, Context: 1048576 | |
| openrouter | openrouter | Input: $1.25 | Output: $10.00 | Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities. Context: 1048576 | |
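The prices above are quoted per million tokens, so a rough per-request cost is a linear combination of input and output token counts. A minimal sketch of that arithmetic, using the $1.25 / $10.00 per-million rates most providers list here; actual bills can differ with context caching, thinking tokens, or provider-specific markups:

```python
# Rough cost estimate for one gemini-2.5-pro request, using the
# $1.25 per 1M input tokens and $10.00 per 1M output tokens rates
# listed by most providers above. Real invoices may differ
# (context caching, thinking tokens, provider markups).
INPUT_PRICE_PER_M = 1.25
OUTPUT_PRICE_PER_M = 10.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# Example: a 200k-token prompt with an 8k-token response.
print(f"${estimate_cost(200_000, 8_000):.2f}")  # -> $0.33
```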