Sonusahani.com

Mistral API Pricing Calculator

Estimate Mistral API billing across Mistral Large 3, Devstral coding agents, Voxtral voice models, and the Batch API.

Input: $0.50 / 1M tokens · Output: $1.50 / 1M tokens

Core LLM Execution
Autonomous Tools & Plugins

Add native tool-use invocations (e.g., agentic web search, OCR pages, or code execution calls). Each tool is priced at a flat per-invocation rate:

Web Search: $30 / 1k calls
Code Execution: $30 / 1k calls
Premium News Index: $50 / 1k calls
Image Generator Node: $100 / 1k images
OCR Docs (Extract): $2 / 1k pages
OCR Docs (Annotate): $3 / 1k pages

API Estimation Matrix

Base Input Cost: $0.0007
Output Completion Cost: $0.0010
Agent Tools Cost: $0.0000
Total Pipeline Cost: $0.0017
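The matrix above follows a simple per-token formula. A minimal sketch in Python, using the calculator's default rates ($0.50/M input, $1.50/M output, and a $30/1k tool rate as an example); the function name is illustrative, not a Mistral API:

```python
def estimate_cost(input_tokens, output_tokens, tool_calls=0,
                  input_rate=0.50, output_rate=1.50, tool_rate=30.0):
    """Estimate a single request's cost in USD.

    input_rate/output_rate are USD per 1M tokens (the calculator's
    defaults); tool_rate is USD per 1,000 tool invocations
    (e.g. web search at $30/1k).
    """
    token_cost = (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
    tool_cost = tool_calls * tool_rate / 1_000
    return token_cost + tool_cost

# 1,400 input tokens plus ~667 output tokens and no tool calls
# reproduces the matrix above:
print(round(estimate_cost(1_400, 667), 4))  # 0.0017
```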

All Mistral Models Overview

Mistral Large 3

Flagship multimodal and multilingual model, open-weight and general-purpose.

Mistral Small 4

New standard for multimodal, reasoning-optimized models with agentic capabilities.

Mistral Medium 3

State-of-the-art performance for simplified enterprise deployments and cost-efficiency.

OCR 3

The world’s best document extraction and understanding model with multimodal vision.

Devstral 2

Enhanced model for advanced coding agents and autonomous software engineering.

Devstral Small 2

The best open-source model optimized specifically for lightweight coding agents.

Codestral

Lightweight, fast model proficient in over 80 programming languages.

Leanstral

The first open-source code agent specifically designed for the Lean 4 language.

Mistral Small 3.2

Multimodal and multilingual SOTA model licensed under Apache 2.0.

Magistral Medium

Reasoning model excelling in domain-specific, transparent reasoning tasks.

Magistral Small

Lightweight reasoning model offering transparent and multilingual understanding.

Ministral 3

Family of models (3B, 8B, 14B) designed for frontier AI performance on the edge.

Voxtral TTS

State-of-the-art text-to-speech engine for natural voice generation.

Voxtral Realtime

Instant voice transcription built for low-latency edge use cases.

Voxtral Small

Speech-to-text model with multimodal audio and text understanding.

Mistral Moderation

A specialized classifier service for precise text content moderation.

Pixtral Large

Vision-capable large model with frontier reasoning capabilities.

Mistral NeMo

Small-form state-of-the-art 12B model built in collaboration with NVIDIA.

Mixtral 8x22B

High-performance sparse Mixture-of-Experts open model.

Mistral 2026 Core Mechanics

Native Tools & Agentic APIs

Mistral’s Agent API enhances AI workflows with built-in tools for code execution, web search, image generation, and persistent library memory. These specialized tools are billed per invocation (e.g., $30 per 1,000 web searches) in addition to standard token processing.
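Per-invocation billing can be tallied separately from token costs. A sketch that sums mixed tool usage against the rates quoted in the pricing table above (the dictionary keys and helper name are illustrative, not official identifiers):

```python
# Per-1k-invocation rates from the pricing table above (USD).
TOOL_RATES = {
    "web_search": 30.0,
    "code_execution": 30.0,
    "premium_news": 50.0,
    "image_generation": 100.0,
}

def tool_cost(usage):
    """usage: mapping of tool name -> number of invocations."""
    return sum(TOOL_RATES[tool] * calls / 1_000 for tool, calls in usage.items())

# 200 web searches plus 5 generated images:
print(tool_cost({"web_search": 200, "image_generation": 5}))  # 6.5
```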

Regional Data Residency

Enterprise APIs introduce regional data processing controls and system-level SLAs. Mistral offers premium news access ($50/1k calls) and data capture features ($0.04/M tokens) to allow for continuous debugging and optimization in sensitive enterprise environments.

OCR 3 & Document Extraction

OCR 3 is specialized for complex document understanding. It charges $2 per 1,000 pages for standard extraction and $3 per 1,000 pages for page annotations. Batch API users can save an additional 50% on these document processing tasks.
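Those per-page rates and the batch discount combine into a one-line calculation; a sketch with an illustrative function name, using the figures above:

```python
def ocr_cost(pages, annotate=False, batch=False):
    """OCR 3 cost: $2/1k pages for extraction, $3/1k for annotation;
    the Batch API halves either rate."""
    rate = 3.0 if annotate else 2.0   # USD per 1,000 pages
    cost = pages * rate / 1_000
    return cost * 0.5 if batch else cost

print(ocr_cost(10_000))                             # 20.0 (standard extraction)
print(ocr_cost(10_000, annotate=True, batch=True))  # 15.0 (batched annotation)
```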

Voxtral Voice Capabilities

The Voxtral family supports everything from text-to-speech ($0.016 per 1k characters) to real-time transcription. Multimodal variants like Voxtral Small can process audio and text together natively in a single chat-completion turn.
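Character-based TTS pricing is straightforward to estimate; a sketch using the $0.016/1k-character rate quoted above (the function name is illustrative):

```python
def tts_cost(characters, rate_per_1k=0.016):
    """Voxtral TTS cost at $0.016 per 1,000 characters."""
    return characters * rate_per_1k / 1_000

# A 5,000-character script:
print(tts_cost(5_000))  # 0.08
```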

Frequently Asked Questions

Technical details based on Mistral's latest 2026 API documentation.

How does the Batch API discount work?

You can save 50% on all token costs (input and output) by using Mistral’s Batch API for non-time-sensitive requests. Requests are typically processed within 24 hours. Note that while OCR, text, and reasoning models support batch discounts, image generation is billed at standard rates.
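The 50% batch discount applies to both input and output tokens, so the comparison is a single halving; a sketch using the calculator's default rates (the function name is illustrative):

```python
def batch_savings(input_tokens, output_tokens,
                  input_rate=0.50, output_rate=1.50):
    """Return (standard, batch) token cost in USD; the Batch API
    takes 50% off both input and output tokens."""
    standard = (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
    return standard, standard * 0.5

# 10M input + 2M output tokens:
std, batch = batch_savings(10_000_000, 2_000_000)
print(std, batch)  # 8.0 4.0
```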

What is Mistral Large 3's context window?

Mistral Large 3 supports an expansive context window of 128,000 tokens as a standard, with higher limits available for Enterprise users. Open-weight versions like Mixtral 8x22B also support massive context handling natively.

Are there additional fees for usage guideline violations?

Mistral charges a $0.05 fee per request for violations caught before generation in the Responses API. If a violation is deemed to have occurred during generation, the standard token costs for that generation still apply.

What programming languages does Codestral support?

Codestral is optimized for performance across over 80 programming languages, making it one of the most versatile coding models for both legacy and modern software stacks.

How are regional Enterprise APIs priced?

Enterprise APIs with regional controls and premium support are customized based on volume and requirements. They often include higher rate limits and guaranteed system-level SLAs for mission-critical deployments.

Does Mistral charge for model storage?

Yes, for fine-tuned models like Codestral or Ministral variants, Mistral charges a flat $2 per month per model for storage, in addition to the initial training costs (which range from $1 to $4 per million tokens, depending on the model).
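The total cost of a fine-tuned model is the one-off training cost plus the recurring storage fee; a sketch assuming a $2/M-token training rate (the stated range is $1–$4, and the function name is illustrative):

```python
def fine_tune_cost(training_tokens_m, months_stored,
                   training_rate=2.0, storage_rate=2.0):
    """One-off training cost (USD per million tokens; $2 assumed here,
    within the stated $1-$4 range) plus the flat $2/month storage fee."""
    return training_tokens_m * training_rate + months_stored * storage_rate

# 50M training tokens, model kept for 6 months:
print(fine_tune_cost(50, 6))  # 112.0
```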