About
A practical LLM cost simulator for builders.
llmcalc.app exists because LLM pricing has gotten genuinely complicated. The headline "$X per million tokens" rarely matches what you actually pay: cache reads, cache writes, batch discounts, long-context tiers, reasoning tokens, retries, and multi-turn loops all compound, and most calculators ignore them.
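To make the compounding concrete, here is a minimal sketch of how per-request cost splits across those line items. The rates and field names are hypothetical placeholders, not any provider's real pricing or this site's actual code:

```typescript
// Hypothetical rates in USD per million tokens -- substitute real numbers.
const RATES = {
  input: 3.0,       // uncached input tokens
  cacheWrite: 3.75, // tokens written into the prompt cache (often a premium)
  cacheRead: 0.3,   // tokens served from the cache (often a steep discount)
  output: 15.0,     // generated tokens, including billed reasoning tokens
};

interface TokenCounts {
  input: number;
  cacheWrite: number;
  cacheRead: number;
  output: number;
}

// Effective cost of one request, in USD.
function requestCost(t: TokenCounts): number {
  return (
    (t.input * RATES.input +
      t.cacheWrite * RATES.cacheWrite +
      t.cacheRead * RATES.cacheRead +
      t.output * RATES.output) /
    1_000_000
  );
}

// A later turn in a cached conversation: mostly cache reads.
const turn: TokenCounts = { input: 500, cacheWrite: 0, cacheRead: 8000, output: 1200 };

// The naive "$3 in / $15 out" estimate bills every input token at full rate:
const naive = ((turn.input + turn.cacheRead) * RATES.input + turn.output * RATES.output) / 1_000_000;

console.log(requestCost(turn), naive);
```

Under these made-up rates the naive estimate is nearly double the modeled cost, which is the gap the calculator exists to close.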
What this is
- A multi-model token counter that runs entirely in your browser.
- A cost calculator with input/output/cache/batch/long-context fields modeled correctly.
- Three differentiated calculators: prompt-caching ROI, batch decision tool, and a multi-turn agent-loop simulator with workload templates.
- Model and comparison reference pages, all driven from a single models.json.
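The reasoning behind the prompt-caching ROI calculator can be sketched in a few lines: writing the cache costs a premium over plain input, and each later cache hit saves the spread between plain input and a cache read, so caching breaks even after a small number of reuses. Rates here are hypothetical, not any provider's real pricing:

```typescript
// Hypothetical per-million-token rates -- substitute your model's numbers.
const inputRate = 3.0;
const cacheWriteRate = 3.75;
const cacheReadRate = 0.3;

// Premium paid up front to populate the cache (per token).
const writePremium = cacheWriteRate - inputRate;

// Savings each time the cached prefix is reused instead of re-sent (per token).
const savingsPerHit = inputRate - cacheReadRate;

// Reuses of the cached prefix needed to recoup the write premium.
const breakEvenReuses = writePremium / savingsPerHit;

console.log(breakEvenReuses);
```

With these rates the break-even is under one reuse, i.e. caching pays for itself on the very first hit; with a smaller read discount the break-even climbs, which is exactly the decision the calculator surfaces.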
What it isn't
- A replacement for your provider's invoice. Always reconcile against the real bill.
- A live monitor. Pricing data is verified periodically, not in real time.
- Affiliated with OpenAI, Anthropic, Google, or Meta. We're a third party.
How the data is kept honest
Every model in our index carries two things you should look at: a last_verified date and a confidence flag (official or inferred). The changelog tracks pricing changes and verification dates publicly.
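Illustratively, a models.json entry might carry those fields like this. This is a hypothetical shape: only last_verified and the confidence flag are described above, and every other field name here is invented for illustration:

```json
{
  "id": "example-model",
  "provider": "ExampleAI",
  "pricing": {
    "input_per_mtok": 3.0,
    "output_per_mtok": 15.0,
    "cache_read_per_mtok": 0.3,
    "cache_write_per_mtok": 3.75
  },
  "last_verified": "2025-01-15",
  "confidence": "official"
}
```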
Spot a wrong number? Use the "report incorrect price" link in the footer, which appears on every page. The mailto template prefills model, field, and source, so a report takes about 30 seconds and goes straight to the maintainer's inbox.
Tracked models
Currently 10 frontier models across Anthropic, OpenAI, Google, and Meta. See the full list.
Tech
Static site (Astro + Tailwind). Tokenization runs client-side via tiktoken-class libraries. No analytics cookies. Privacy-respecting traffic counts via Cloudflare Web Analytics.