
Gemini 3 Flash

gemini-3-flash-preview

Default provider: GitHub Copilot · Context: 128,000 tokens · Output limit: 64,000 tokens

Available from 11 providers

| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit | Notes |
|---|---|---|---|---|---|---|
| githubcopilot | models-dev | $0.00 | $0.00 | 128,000 | 64,000 | GitHub Copilot (free) |
| abacus | models-dev | $0.50 | $3.00 | 1,048,576 | 65,536 | Abacus |
| venice | models-dev | $0.70 | $3.75 | 262,144 | 65,536 | Venice AI |
| google | models-dev | $0.50 | $3.00 | 1,048,576 | 65,536 | Google |
| googlevertex | models-dev | $0.50 | $3.00 | 1,048,576 | 65,536 | Vertex |
| zenmux | models-dev | $0.50 | $3.00 | 1,048,576 | 64,000 | ZenMux |
| requesty | models-dev | $0.50 | $3.00 | 1,048,576 | 65,536 | Requesty |
| vertex | litellm | $0.50 | $3.00 | 1,048,576 | — | — |
| gemini | litellm | $0.50 | $3.00 | 1,048,576 | — | — |
| openrouter | openrouter | $0.50 | $3.00 | 1,048,576 | — | OpenRouter |
| factoryai | factoryai | — | — | — | — | — |

Description (from OpenRouter): Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited for interactive development, long-running agent loops, and collaborative coding tasks. Compared to Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability. The model supports a 1M-token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.
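At the $0.50 input / $3.00 output per-1M-token rate most providers list above, per-request cost is simple arithmetic. A minimal sketch; the token counts are illustrative, not from the source:

```python
# Per-1M-token prices listed by most providers in the table above (USD).
INPUT_PRICE_PER_1M = 0.50
OUTPUT_PRICE_PER_1M = 3.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_1M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_1M

# Example: a large agentic turn with 200k input tokens and 8k output tokens.
cost = request_cost(200_000, 8_000)
print(f"${cost:.3f}")  # 0.2 * $0.50 + 0.008 * $3.00 = $0.124
```

Note that per-provider rates differ (e.g. Venice AI at $0.70 / $3.75, and GitHub Copilot at $0), so the constants should be swapped to match the chosen provider.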