# llama-3.1-8b-instruct

Per-provider pricing, context window, and output limits for Meta's Llama 3.1 8B Instruct model.
| Provider | Source | Input ($/1M) | Output ($/1M) | Context | Output Limit |
|---|---|---|---|---|---|
| helicone | models-dev | $0.02 | $0.05 | 16384 | 16384 |
| cloudflareworkersai | models-dev | $0.28 | $0.83 | 7968 | 7968 |
| wandb | models-dev | $0.22 | $0.22 | 128000 | 32768 |
| cloudflareaigateway | models-dev | $0.28 | $0.83 | 128000 | 16384 |
| ovhcloud | models-dev | $0.11 | $0.11 | 131000 | 131000 |
| synthetic | models-dev | $0.20 | $0.20 | 128000 | 32768 |
| inference | models-dev | $0.03 | $0.03 | 16000 | 4096 |
| scaleway | models-dev | $0.20 | $0.20 | 128000 | 16384 |
| nscale | litellm | $0.03 | $0.03 | N/A | N/A |
| perplexity | litellm | $0.20 | $0.20 | 131072 | N/A |
| openrouter | openrouter | $0.02 | $0.03 | 131072 | N/A |

From the OpenRouter listing: Meta's latest class of model (Llama 3.1) launched with a variety of sizes and flavors. This 8B instruct-tuned version is fast and efficient, and has demonstrated strong performance compared to leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).
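All prices above are quoted in dollars per 1M tokens, with input and output billed at separate rates. A minimal sketch of how to turn those rates into a per-request cost estimate (the `request_cost` helper and the small `PRICES` subset are illustrative, not part of any provider SDK):

```python
# Per-1M-token prices ($input, $output) copied from the table above.
# Only a subset of providers is included for illustration.
PRICES = {
    "helicone": (0.02, 0.05),
    "ovhcloud": (0.11, 0.11),
    "openrouter": (0.02, 0.03),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for one request on the given provider."""
    in_price, out_price = PRICES[provider]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: 4,000 input tokens and 1,000 output tokens on helicone:
# 4000 * 0.02 / 1e6 + 1000 * 0.05 / 1e6 = $0.00013
print(request_cost("helicone", 4000, 1000))
```

Note that a provider's listed rate is only part of the picture: the context and output limits above determine whether a long prompt fits at all, so the cheapest provider (helicone at $0.02/$0.05) also has one of the smallest context windows (16384 tokens).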