AI Prompt Token Cost Calculator
Estimate the cost of AI API calls based on input/output token usage across popular models.
~750 words ≈ 1,000 tokens. A typical prompt is 100–2,000 tokens.
A short reply ≈ 100–500 tokens; a long essay ≈ 1,000–4,000 tokens.
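The rules of thumb above can be sketched as a rough estimator. This is only an approximation for English text; exact counts require the model's own tokenizer (e.g., OpenAI's tiktoken):

```python
def estimate_tokens_from_words(word_count: int) -> int:
    """Rough English-text estimate: ~750 words ≈ 1,000 tokens."""
    return round(word_count * 1000 / 750)

def estimate_tokens_from_chars(char_count: int) -> int:
    """Rough estimate: ~1 token ≈ 4 characters of English text."""
    return round(char_count / 4)
```

For example, a 1,500-word document estimates to about 2,000 tokens by the word rule.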
Formula
Input Cost per Request = (Input Tokens ÷ 1,000) × Input Price per 1K Tokens
Output Cost per Request = (Output Tokens ÷ 1,000) × Output Price per 1K Tokens
Total Cost per Request = Input Cost per Request + Output Cost per Request
Total Cost = Total Cost per Request × Number of Requests
Example: 500 input tokens and 300 output tokens per request, across 1,000 requests, on GPT-4o mini:
Input = (500/1000) × $0.00015 = $0.000075/req → $0.075 total
Output = (300/1000) × $0.0006 = $0.00018/req → $0.18 total
Grand Total = $0.075 + $0.18 = $0.255
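The formula and the worked example above can be sketched in a few lines of Python. The GPT-4o mini per-1K prices ($0.00015 input, $0.0006 output) are assumptions taken from the example and should be verified against current pricing:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Cost of a single request: token counts scaled by per-1K prices."""
    input_cost = (input_tokens / 1000) * input_price_per_1k
    output_cost = (output_tokens / 1000) * output_price_per_1k
    return input_cost + output_cost

def total_cost(input_tokens: int, output_tokens: int,
               input_price_per_1k: float, output_price_per_1k: float,
               num_requests: int) -> float:
    """Total cost over a batch of identical requests."""
    return request_cost(input_tokens, output_tokens,
                        input_price_per_1k, output_price_per_1k) * num_requests

# Worked example: 500 in + 300 out per request, 1,000 requests  → ≈ $0.255
print(total_cost(500, 300, 0.00015, 0.0006, 1000))
```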
Assumptions & References
- Prices are in USD and reflect publicly listed API pricing as of early 2025; always verify current prices on the provider's pricing page.
- Token counts are approximate: ~1 token ≈ 4 characters or ¾ of a word in English (OpenAI tokenizer rule of thumb).
- 750 words ≈ 1,000 tokens is a widely used approximation for English text.
- Prices shown are per 1,000 tokens; some providers quote rates per 1 million tokens instead, so divide those rates by 1,000 to compare.
- System prompts, function/tool definitions, and conversation history all count as input tokens.
- Reasoning models (e.g., o1) bill hidden reasoning tokens at the output rate; those tokens are not included here.
- Batch API discounts (e.g., OpenAI Batch API offers 50% off) are not applied in this calculator.
- Free tiers (e.g., Gemini Flash-8B) are subject to rate limits and may change.
- Sources: OpenAI Pricing, Anthropic Pricing, Google AI Pricing.
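The batch-discount caveat in the list above is simple to apply by hand if needed: a flat discount just scales the undiscounted total. A minimal sketch, assuming the 50% Batch API rate mentioned above:

```python
def apply_batch_discount(total_cost_usd: float, discount: float = 0.5) -> float:
    """Apply a flat discount (e.g., 0.5 for a 50% batch rate) to an undiscounted total."""
    return total_cost_usd * (1 - discount)

# Half of the worked-example total of $0.255
print(apply_batch_discount(0.255))
```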