Gemini 2.0 Flash

gemini-2.0-flash-001

Provider: GitHub Copilot · Context: 1,000,000 tokens · Output limit: 8,192 tokens

Available at 6 Providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| githubcopilot | models-dev | $0.00 | $0.00 | Context: 1,000,000; output limit: 8,192 | |
| abacus | models-dev | $0.10 | $0.40 | Context: 1,000,000; output limit: 8,192 | |
| deepinfra | litellm | $0.10 | $0.40 | Context: 1,000,000 | |
| vertex | litellm | $0.15 | $0.60 | Context: 1,048,576 | |
| gemini | litellm | $0.10 | $0.40 | Context: 1,048,576 | |
| openrouter | openrouter | $0.10 | $0.40 | Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5). It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling, delivering more seamless and robust agentic experiences. Context: 1,048,576 | |
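
Per-request cost follows directly from the per-million-token prices above: multiply each token count by the matching rate and sum. Below is a minimal sketch of that arithmetic; the prices are copied from the table, while the provider keys, function name, and token counts are illustrative assumptions rather than part of any official SDK.

```python
# Rough cost estimate for gemini-2.0-flash-001 using the per-1M-token
# prices listed in the table above. Token counts are hypothetical.

PRICES_PER_MILLION = {
    # provider: (input $/1M tokens, output $/1M tokens)
    "githubcopilot": (0.00, 0.00),
    "abacus":        (0.10, 0.40),
    "deepinfra":     (0.10, 0.40),
    "vertex":        (0.15, 0.60),
    "gemini":        (0.10, 0.40),
    "openrouter":    (0.10, 0.40),
}

def estimate_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    input_price, output_price = PRICES_PER_MILLION[provider]
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Example: a 200,000-token prompt with an 8,192-token response (the output
# limit) via the gemini provider comes to roughly $0.023.
print(f"${estimate_cost('gemini', 200_000, 8_192):.4f}")
```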