Model profile
Xiaomi
New in 2026

Xiaomi: MiMo-V2.5-Pro

Xiaomi: MiMo-V2.5-Pro is a budget-priced, text-first model from Xiaomi with a balanced runtime profile, a large context window, and its clearest fit in long-context research and agent workflows.

Best for: Long-context research / Agent workflows
Balanced latency
Large context
Budget pricing
Intelligence
53.8

Benchmark blend

Coding
45.5

Dev workflow signal

Context
1049K Tokens

Large

Input Price
$1.00

Budget tier

Decision snapshot
65

Xiaomi: MiMo-V2.5-Pro currently reads as a budget text-first option with large context and a balanced runtime profile.

Overall profile
Selective fit
Best for
Long-context research / Agent workflows
Latency tier
Balanced
Price tier
Budget
Source coverage
OpenRouter, Artificial Analysis

Decision Strip

Decision rail before the raw tables

Core buy-side signals stay in one pass. The rest of the page expands only after intelligence, speed, context, and price are clear.

Intelligence
53.8
54

General reasoning and benchmark headroom.

Situational
Speed
65 tok/s
53

TTFT (time to first token) 1.84s

Situational
Context
1049K Tokens
100

How much prompt and task state can stay in view.

Above average
Price
$1.00
86

$3.00 output / 1M

Efficient

Editorial Profile

Xiaomi: MiMo-V2.5-Pro in one narrative

Positioning, tradeoffs, and fit are consolidated into one read instead of repeating the same story across separate cards.

Selective fit
Coding score 46
Math score N/A

MiMo-V2.5-Pro is Xiaomi’s flagship model, delivering strong performance in general agentic capabilities, complex software engineering, and long-horizon tasks, with top rankings on benchmarks such as ClawEval, GDPVal, and SWE-bench Pro....

Identity

Xiaomi text-first profile

Positioning

Long-context research / Agent workflows with large context and balanced runtime.

Cost posture

Efficient spend profile. More comfortable for sustained prompt volume if the capability fit is right.

Strengths
  • Large context headroom supports repo-wide prompts and long research sessions (a rough sizing sketch follows at the end of this profile).

Tradeoffs
  • Budget-friendly input pricing is a strength, but raw capability may vary by workload.

  • Latency is balanced rather than ultra-fast, which is fine for most workflows but not the snappiest tier.

  • Current metadata points to a text-first profile rather than a broad multimodal one.

Best fit
  • Long-context summarization, repo analysis, and policy or document review.
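
For a rough sense of what the context and price figures mean together, the sketch below converts the listed 1049K-token window into approximate text volume and per-call input cost. It uses the $1.00 per 1M input tokens price from the specs section and an assumed ~4 characters per token, a common English-text rule of thumb rather than a measured property of this model's tokenizer.

```python
# Minimal sizing sketch for the 1049K-token context window.
# Assumption: ~4 characters per token (rule of thumb; the real ratio
# depends on this model's tokenizer and the content being prompted).

CONTEXT_TOKENS = 1_049_000        # listed context window
INPUT_PRICE_PER_1M = 1.00         # USD per 1M input tokens, from the pricing table
CHARS_PER_TOKEN = 4               # assumed approximation

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
approx_megabytes = approx_chars / 1_000_000
full_context_input_cost = CONTEXT_TOKENS / 1_000_000 * INPUT_PRICE_PER_1M

print(f"~{approx_megabytes:.1f} MB of plain text fits in one full-context prompt")
print(f"~${full_context_input_cost:.2f} of input spend per full-context call")
```

Under these assumptions, even a completely full prompt costs on the order of a dollar of input tokens per call, which is consistent with the efficient spend framing above.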

Explore Next

Similar profiles worth opening next

Xiaomi

Xiaomi: MiMo-V2-Pro

Intelligence
49.2
Context
1049K Tokens
Input Price
$0.00
Xiaomi

MiMo-V2-Omni-0327

Intelligence
44.9
Context
N/A
Input Price
$0.00
Xiaomi

Xiaomi: MiMo-V2-Omni

Intelligence
43.4
Context
262K Tokens
Input Price
$0.00

Benchmarks

Grouped by job-to-be-done

Only benchmark categories with actual signal are shown. Secondary values stay as simple definitions instead of nested micro-cards.

General intelligence

Broad reasoning, knowledge depth, and flagship benchmark posture.

Intelligence Index
53.8
GPQA
86.6%
HLE
33.8%
Coding

Software implementation, debugging quality, and coding benchmark signal.

Coding Index
45.5
SciCode
50.2%
Agent / tool use

Long-horizon execution quality and interactive benchmark evidence.

IFBench
79.9%
TAU2
94.2%
TerminalBench Hard
43.2%
LCR
73.3%

Specs & Pricing

Technical snapshot and cost posture

Specs stay neutral, while pricing gets emphasis through values rather than extra containers. Raw provider internals remain in metadata at the end.

Technical snapshot
Context Window
1049K Tokens
Vision
Text-first
Modalities
text
Tokenizer
Other
Max Completion
131,072 tokens
Moderation
No
Supported Parameters
frequency_penalty, include_reasoning, max_tokens, presence_penalty, reasoning, response_format, stop, temperature, tool_choice, tools, top_p
Input Modalities
text
Output Modalities
text
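
To make the supported-parameter list above concrete, here is a minimal request sketch against OpenRouter's OpenAI-compatible chat completions endpoint, using a subset of the listed parameters. The model slug is a hypothetical placeholder for illustration; check the provider listing for the real identifier.

```python
# Minimal sketch of a chat-completion request through OpenRouter's
# OpenAI-compatible API, using a subset of the parameters listed above.
# The model slug is an assumed placeholder, not a confirmed identifier.
import os
import requests

payload = {
    "model": "xiaomi/mimo-v2.5-pro",   # hypothetical slug for illustration
    "messages": [
        {"role": "user", "content": "Summarize the key obligations in this policy text."}
    ],
    "max_tokens": 2048,                # well under the 131,072 completion cap
    "temperature": 0.3,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "stop": ["END_OF_SUMMARY"],
    "include_reasoning": False,        # reasoning toggle from the parameter list
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```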
Price architecture
Input
per 1M input tokens
$1.00
Output
per 1M output tokens
$3.00
Blended
Artificial Analysis 3:1 input:output mix
$1.50

This model is relatively efficient on price. It is the easier fit when sustained prompt volume matters.
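
As a quick check on the arithmetic, the blended figure is simply a 3:1 weighting of the listed input and output prices:

```python
# Blended price check: 3 parts input to 1 part output (Artificial Analysis 3:1 mix).
input_price = 1.00    # USD per 1M input tokens
output_price = 3.00   # USD per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.2f} per 1M tokens")   # -> $1.50, matching the table
```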

OpenRouter Cache Read
$0.00

Metadata

Raw source tables at the end

Verification details remain available, but the page no longer forces them ahead of the editorial read.