Model profile
inception

Inception: Mercury

Inception: Mercury is a budget text-first model from Inception with a heavy runtime profile, an extended-context posture, and the clearest fit for long-context research and reasoning.

Best for: Long-context research / Reasoning · Heavy latency · Extended context · Budget pricing
Intelligence
N/A

Benchmark blend

Coding
N/A

Dev workflow signal

Context
128K Tokens

Extended

Input Price
$0.25

Budget tier

Decision snapshot
56

Inception: Mercury currently reads as a budget text-first option with extended context and a heavy runtime profile.

Overall profile
Selective fit
Best for
Long-context research / Reasoning
Latency tier
Heavy
Price tier
Budget
Source coverage
OpenRouter

Decision Strip

Decision rail before the raw tables

Core buy-side signals stay in one pass. The rest of the page expands only after intelligence, speed, context, and price are clear.

Intelligence
N/A
44

General reasoning and benchmark headroom.

Situational
Speed
N/A
46

Latency data is partial.

Situational
Context
128K Tokens
76

How much prompt and task state can stay in view.

Competitive
Price
$0.25
86

$0.75 per 1M output tokens

Efficient

Editorial Profile

Inception: Mercury in one narrative

Positioning, tradeoffs, and fit are consolidated into one read instead of repeating the same story across separate cards.

Selective fit · Coding score 40 · Math score 36

Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude 3.5 Haiku while matching their performance. Mercury's speed enables developers to provide responsive user experiences, including voice agents, search interfaces, and chatbots. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury).
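
For readers who want to evaluate the model hands-on, a minimal request sketch follows. It assumes the OpenRouter chat completions endpoint noted under source coverage; the model slug `inception/mercury` and the `OPENROUTER_API_KEY` environment variable are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch: calling Mercury through OpenRouter's OpenAI-compatible
# chat completions endpoint. The model slug and env var name are assumptions;
# check the provider listing before use.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "inception/mercury",  # assumed slug
        "messages": [
            {"role": "user", "content": "Summarize the key obligations in this policy document: ..."}
        ],
        "max_tokens": 1024,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same call shape covers the long-document summarization and repo-analysis jobs listed under best fit; only the message content and token limits change.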

Identity

Inception text-first profile

Positioning

Long-context research / Reasoning with extended context and heavy runtime.

Cost posture

Efficient spend profile. More comfortable for sustained prompt volume if the capability fit is right.

Strengths
  • Large context headroom supports repo-wide prompts and long research sessions.

Tradeoffs
  • Budget-friendly input pricing is a strength, but raw capability may vary by workload.

  • Latency profile is better for deliberate runs than rapid back-and-forth chat.

  • Current metadata points to a text-first profile rather than a broad multimodal one.

Best fit
  • Long-context summarization, repo analysis, and policy or document review.

Compare Next

Similar profiles worth opening next

inception

Inception: Mercury 2

Intelligence
32.8
Context
128K Tokens
Input Price
$0.25
inception

Inception: Mercury Coder

Intelligence
N/A
Context
128K Tokens
Input Price
$0.25

Benchmarks

Grouped by job-to-be-done

Only benchmark categories with actual signal are shown. Secondary values stay as simple definitions instead of nested micro-cards.

No benchmark data is available for this model yet.

Specs & Pricing

Technical snapshot and cost posture

Specs stay neutral; pricing gets emphasis through values rather than extra containers. Raw provider internals remain in the metadata at the end.

Technical snapshot
Context Window
128K Tokens
Vision
Text-first
Modalities
text->text, text
Tokenizer
Other
Max Completion
32,000 tokens
Moderation
No
Supported Parameters
max_tokens, response_format, stop, structured_outputs, temperature, tool_choice, tools (see the request sketch after this snapshot)
Input Modalities
text
Output Modalities
text
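
As a follow-on to the earlier request sketch, this shows how several of the listed parameters might be combined in one call, including a structured-output `response_format`. The JSON-schema payload shape follows the OpenAI-style convention OpenRouter documents; the schema name and fields here are invented for illustration, so verify support against the provider listing.

```python
# Sketch: exercising the listed parameters (max_tokens, temperature, stop,
# response_format) in a single OpenRouter request. Payload details are
# assumptions, not sourced from this page.
import os
import requests

payload = {
    "model": "inception/mercury",  # assumed slug
    "messages": [
        {"role": "user", "content": "Extract the vendor name and total from this invoice text: ..."}
    ],
    "max_tokens": 512,
    "temperature": 0.2,
    "stop": ["\n\n"],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "invoice_fields",  # hypothetical schema for illustration
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "vendor": {"type": "string"},
                    "total": {"type": "number"},
                },
                "required": ["vendor", "total"],
                "additionalProperties": False,
            },
        },
    },
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```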
Price architecture
Input
per 1M input tokens
$0.25
Output
per 1M output tokens
$0.75
Blended
AA 3:1 input:output mix
N/A

This model is relatively efficient on price. It is the easier fit when sustained prompt volume matters.

OpenRouter Cache Read
$0.00
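
The blended figure above is reported as N/A. Purely as an illustrative calculation, assuming the 3:1 input:output token mix that blended pricing conventionally uses, the listed per-1M-token prices would combine as follows:

```python
# Illustrative only: blended price under an assumed 3:1 input:output token mix,
# using the listed per-1M-token prices. The page itself reports blended as N/A.
input_price = 0.25   # USD per 1M input tokens
output_price = 0.75  # USD per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"Blended price (3:1 mix): ${blended:.3f} per 1M tokens")  # -> $0.375
```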

Metadata

Raw source tables at the end

Verification details remain available, but the page no longer forces them ahead of the editorial read.