Qwen/Qwen3-VL-235B-A22B-Instruct

qwen3-vl-235b-a22b-instruct

Provider: SiliconFlow (China), Context: 262,000 tokens, Output Limit: 262,000 tokens

Available at 6 Providers

| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Context | Output Limit |
| --- | --- | --- | --- | --- | --- |
| siliconflowcn (SiliconFlow, China) | models-dev | $0.30 | $1.50 | 262,000 | 262,000 |
| chutes (Chutes) | models-dev | $0.30 | $1.20 | 262,144 | 262,144 |
| siliconflow (SiliconFlow) | models-dev | $0.30 | $1.50 | 262,000 | 262,000 |
| helicone (Helicone) | models-dev | $0.30 | $1.50 | 256,000 | 16,384 |
| fireworksai (fireworks_ai) | litellm | $0.22 | $0.88 | 262,144 | |
| openrouter | openrouter | $0.12 | $0.56 | 262,144 | |

OpenRouter description: Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language use: VQA, document parsing, chart/table extraction, and multilingual OCR. The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning.

Beyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows (turning sketches or mockups into code and assisting with UI debugging) while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.
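Providers such as OpenRouter and SiliconFlow serve this model through OpenAI-compatible chat-completions endpoints, where an image is passed as an `image_url` content part alongside the text prompt. A minimal sketch of constructing such a request payload; the model id `qwen/qwen3-vl-235b-a22b-instruct` (OpenRouter's slug), the endpoint, and the example image URL are assumptions, and no network call is made here:

```python
import json

# Assumed OpenRouter-style model slug; SiliconFlow uses
# "Qwen/Qwen3-VL-235B-A22B-Instruct" per the listing above.
DEFAULT_MODEL = "qwen/qwen3-vl-235b-a22b-instruct"

def build_vision_request(image_url: str, question: str,
                         model: str = DEFAULT_MODEL) -> dict:
    """Build an OpenAI-style chat payload mixing text and one image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # Multimodal content is a list of typed parts.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Hypothetical image URL for illustration only.
payload = build_vision_request(
    "https://example.com/chart.png",
    "Extract the table from this chart as CSV.",
)
print(json.dumps(payload, indent=2))
```

To actually send the request, POST this payload to the provider's `/chat/completions` endpoint with your API key in the `Authorization` header; the same payload shape works across the OpenAI-compatible providers listed above, differing only in base URL and model slug.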