AristoAiStack

🧮 LLM Cost Calculator

Compare API pricing across 23 models from OpenAI, Anthropic, Google, DeepSeek, xAI & Mistral. Updated 2026-02-18

Quick presets: 10K input tokens · 1K output tokens · 10K monthly requests

[Interactive table: Model, Provider, Input / 1M, Output / 1M, Cost / Request, Monthly Cost, Context — sortable by Cheapest, Most Expensive, and Savings Potential]

📊 Monthly Cost Comparison

Frequently Asked Questions

How is the cost calculated?

Cost = (input tokens × input price per token) + (output tokens × output price per token). Prices are per 1 million tokens. Monthly cost multiplies the per-request cost by your estimated monthly requests.
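The formula above can be sketched in a few lines of Python. The prices used here are placeholders for illustration, not current rates for any model:

```python
# Sketch of the calculator's formula. Prices are per 1 million tokens.
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """USD cost of a single request."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

def monthly_cost(per_request: float, requests_per_month: int) -> float:
    """USD cost per month at a given request volume."""
    return per_request * requests_per_month

# Example: 10K input / 1K output at hypothetical $0.30 in, $1.20 out per 1M tokens
per_req = request_cost(10_000, 1_000, 0.30, 1.20)   # 0.003 + 0.0012 = 0.0042
total = monthly_cost(per_req, 10_000)                # 42.0 per month
```

At 10K requests per month, even fractions of a cent per request add up, which is why the monthly column dominates the comparison.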

Which LLM is cheapest for a chatbot?

For a typical chatbot workload (10K input tokens, 1K output tokens per request), DeepSeek V3.2, Gemini 2.5 Flash-Lite, and GPT-4.1 Nano are the most affordable options. DeepSeek V3.2 stands out: it rivals premium models at a fraction of the cost.

Do these prices include prompt caching?

No. These are standard API prices. Most providers also offer prompt caching (50-90% savings on cached tokens) and batch processing (typically a 50% discount), so actual costs can be significantly lower with optimization.
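A minimal sketch of how caching changes the input bill, assuming a 90% discount on cached tokens (discount rates and the token split are illustrative; actual cache pricing varies by provider):

```python
# Effective input cost when part of the prompt is served from a provider-side
# prompt cache. Cached tokens are billed at (1 - cache_discount) of the
# standard rate; the rest at full price.
def effective_input_cost(input_tokens: int, cached_fraction: float,
                         price_per_m: float, cache_discount: float = 0.9) -> float:
    cached = input_tokens * cached_fraction
    uncached = input_tokens - cached
    billed_tokens = uncached + cached * (1 - cache_discount)
    return billed_tokens / 1_000_000 * price_per_m

# 10K-token prompt, 80% cache hit rate, $0.30/1M standard input price:
# uncached 2,000 tokens + 8,000 cached billed as 800 -> 2,800 billed tokens
cost = effective_input_cost(10_000, 0.8, 0.30)   # 0.00084 vs 0.003 uncached
```

With a stable system prompt and high hit rates, the input side of the bill can shrink by most of the cached discount, which is why the standard prices here are an upper bound.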

How often is pricing data updated?

We verify pricing against official provider documentation regularly; the last update was 2026-02-18. AI pricing changes frequently, so always verify current rates on provider websites before making purchasing decisions.

What about reasoning tokens (o-series, DeepSeek R1)?

Reasoning models (OpenAI o3/o4-mini, DeepSeek R1) generate internal "thinking" tokens that are billed as output tokens. The visible answer may be short, but you are charged for the hidden reasoning as well, which makes per-request costs harder to predict.
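The effect is easy to see by comparing the same request with and without hidden reasoning tokens. All token counts and prices below are hypothetical:

```python
# Reasoning models bill (visible output + hidden reasoning tokens) at the
# output rate, so identical-looking answers can cost very different amounts.
def reasoning_request_cost(input_tokens: int, visible_output: int,
                           reasoning_tokens: int,
                           input_price_per_m: float,
                           output_price_per_m: float) -> float:
    billed_output = visible_output + reasoning_tokens
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (billed_output / 1_000_000) * output_price_per_m

# Same 500-token visible answer, hypothetical $1/1M in, $4/1M out:
plain = reasoning_request_cost(10_000, 500, 0, 1.0, 4.0)          # 0.012
with_reasoning = reasoning_request_cost(10_000, 500, 4_000, 1.0, 4.0)  # 0.028
# Output billing jumps from 500 to 4,500 tokens for the same visible text.
```

Because the number of reasoning tokens depends on the prompt and is not known in advance, budgeting for these models usually means estimating a range rather than a fixed per-request cost.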