glm-4.6v
The GLM-4.6V series is Z.ai's latest iteration of its multimodal large language models. GLM-4.6V scales its context window to 128K tokens during training and achieves SoTA performance in visual understanding among models of a similar parameter scale.
| Provider | Source | Input Price ($/1M) | Output Price ($/1M) | Description | Free |
|---|---|---|---|---|---|
| vercel | vercel | $0.30 | $0.90 | The GLM-4.6V series is Z.ai's latest iteration of its multimodal large language models. GLM-4.6V scales its context window to 128K tokens during training and achieves SoTA performance in visual understanding among models of a similar parameter scale. | |
| poe | poe | - | - | GLM-4.6V represents a significant multimodal advancement in the GLM series, achieving state-of-the-art visual-understanding accuracy for models of its parameter scale. Notably, it is the first visual model to natively integrate function calling directly into its architecture, creating a seamless pathway from visual perception to executable actions and a unified technical foundation for deploying multimodal agents in real-world business applications. File support: text, Markdown, image, and PDF files. Context window: 131k tokens. Optional parameters: Enable Thinking (the model reasons before responding; disabled by default); Temperature (controls randomness, with lower values more focused and deterministic; range 0 to 2, default 0.7); Max Output Tokens (maximum tokens to generate; 1 to 32768, default 32768). See the request sketch after this table. | |
| chutes | models-dev | $0.30 | $0.90 | Provider: Chutes; Context: 131072; Output Limit: 65536 | |
| zenmux | models-dev | $0.14 | $0.42 | Provider: ZenMux; Context: 200000; Output Limit: 64000 | |
| zhipuai | models-dev | $0.30 | $0.90 | Provider: Zhipu AI; Context: 128000; Output Limit: 32768 | |
| openrouter | openrouter | $0.30 | $0.90 | GLM-4.6V is a large multimodal model designed for high-fidelity visual understanding and long-context reasoning across images, documents, and mixed media. It supports up to 128K tokens, processes complex page layouts and charts directly as visual inputs, and integrates native multimodal function calling to connect perception with downstream tool execution (see the tool-call sketch after this table). The model also enables interleaved image-text generation and UI-reconstruction workflows, including screenshot-to-HTML synthesis and iterative visual editing. Context: 131072 | |
| zai | zai | $0.30 | $0.05 | - | |
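
As a concrete illustration of the optional parameters listed in the poe row, here is a minimal request sketch against an OpenAI-compatible endpoint. The base URL, model ID, and the `thinking` extra-body field are assumptions for illustration, not confirmed API details; check your provider's documentation for the exact values.

```python
# Minimal sketch of a GLM-4.6V vision request through an OpenAI-compatible
# endpoint. The base URL, model ID, and the "thinking" field are assumptions
# drawn from the provider descriptions above, not confirmed API documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.z.ai/api/paas/v4",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model ID
    messages=[
        {
            "role": "user",
            "content": [
                # Image and text are interleaved in one multimodal message.
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
                {"type": "text", "text": "Summarize the trend shown in this chart."},
            ],
        }
    ],
    temperature=0.7,   # provider default per the table above
    max_tokens=32768,  # provider maximum per the table above
    # Assumed toggle for the optional "Enable Thinking" mode described above.
    extra_body={"thinking": {"type": "enabled"}},
)

print(response.choices[0].message.content)
```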
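
The native multimodal function calling mentioned in the poe and openrouter rows can be sketched the same way: the model receives an image plus a tool schema and may respond with a tool call rather than plain text. The `create_ticket` tool below is hypothetical, and the endpoint and model ID remain the same assumptions as in the previous sketch.

```python
# Hedged sketch of visual-perception-to-tool-execution: the model inspects an
# image and may emit a structured tool call. The tool name and schema are
# illustrative only, not part of any provider's documented API.
from openai import OpenAI

client = OpenAI(base_url="https://api.z.ai/api/paas/v4", api_key="YOUR_API_KEY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_ticket",  # hypothetical downstream action
            "description": "File a ticket for a defect visible in a screenshot.",
            "parameters": {
                "type": "object",
                "properties": {
                    "summary": {"type": "string"},
                    "severity": {"type": "string", "enum": ["low", "medium", "high"]},
                },
                "required": ["summary", "severity"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/ui-bug.png"}},
                {"type": "text", "text": "If this screenshot shows a defect, file a ticket."},
            ],
        }
    ],
    tools=tools,
)

choice = response.choices[0]
if choice.message.tool_calls:
    # The model chose to act: print the arguments it filled in for the tool.
    print(choice.message.tool_calls[0].function.arguments)
else:
    print(choice.message.content)
```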