Model profile
deepseek

DeepSeek: DeepSeek V3.1

DeepSeek: DeepSeek V3.1 is a budget, text-first model from DeepSeek with a heavy runtime profile, a standard context posture, and its clearest fit in long-context research and agent workflows.

Best for: Long-context research / Agent workflows · Heavy latency · Standard context · Budget pricing
Intelligence
28.1

Benchmark blend

Coding
28.4

Dev workflow signal

Context
33K Tokens

Standard

Input Price
$0.56

Budget tier

Decision snapshot
44

DeepSeek: DeepSeek V3.1 currently reads as a budget text-first option with standard context and a heavy runtime profile.

Overall profile
Use-case specific
Best for
Long-context research / Agent workflows
Latency tier
Heavy
Price tier
Budget
Source coverage
OpenRouter · Artificial Analysis

Decision Strip

Decision rail before the raw tables

Core buy-side signals stay in one pass. The rest of the page expands only after intelligence, speed, context, and price are clear.

Intelligence
28.1
28

General reasoning and benchmark headroom.

Limited
Speed
N/A
46

Latency data is partial.

Situational
Context
33K Tokens
48

How much prompt and task state can stay in view.

Situational
Price
$0.56
86

$1.67 / 1M output tokens

Efficient

Editorial Profile

DeepSeek: DeepSeek V3.1 in one narrative

Positioning, tradeoffs, and fit are consolidated into one read instead of repeating the same story across separate cards.

Use-case specific · Coding score 28 · Math score 50

DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference. Users can control the reasoning behaviour with the `enabled` boolean on the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config). The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it suitable for research, coding, and agentic workflows. It succeeds the [DeepSeek V3-0324](/deepseek/deepseek-chat-v3-0324) model and performs well on a variety of tasks.
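To make the mode switch concrete, here is a minimal sketch of the toggle described above. It assumes the standard OpenRouter chat completions endpoint, a `deepseek/deepseek-chat-v3.1` slug, and an `OPENROUTER_API_KEY` environment variable; the slug and variable name are assumptions of the example, not taken from this page.

```python
import os

import requests

# Assumed for this sketch: the OpenRouter slug for this model and the name of
# the environment variable holding your API key.
MODEL = "deepseek/deepseek-chat-v3.1"
API_KEY = os.environ["OPENROUTER_API_KEY"]


def ask(prompt: str, reasoning_enabled: bool) -> str:
    """Send one prompt to OpenRouter, toggling thinking vs. non-thinking mode."""
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            # Reasoning is switched with a boolean, per the docs linked above.
            "reasoning": {"enabled": reasoning_enabled},
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    question = "Outline a plan for surveying recent FP8 inference papers."
    print(ask(question, reasoning_enabled=False))  # non-thinking: faster reply
    print(ask(question, reasoning_enabled=True))   # thinking: slower, more deliberate
```

Non-thinking mode is the sensible default for latency-sensitive chat; the enabled flag is better reserved for the deliberate, agentic runs this profile highlights.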

Identity

deepseek text-first profile

Positioning

Long-context research / Agent workflows with standard context and heavy runtime.

Cost posture

Efficient spend profile. More comfortable for sustained prompt volume if the capability fit is right.

Strengths
  • The available source data suggests a balanced profile rather than one dominant edge.

Tradeoffs
  • Budget-friendly input pricing is a strength, but raw capability may vary by workload.

  • Latency profile is better for deliberate runs than rapid back-and-forth chat.

  • Current metadata points to a text-first profile rather than a broad multimodal one.

  • Context window is more comfortable for focused tasks than extremely long sessions.

Best fit
  • Focused chat, retrieval-augmented flows, and narrower production tasks.

Compare Next

Similar profiles worth opening next

deepseek

DeepSeek: DeepSeek V3.2 Speciale

Intelligence
34.1
Context
164K Tokens
Input Price
$0.00
deepseek

DeepSeek: DeepSeek V3.2 Exp

Intelligence
32.9
Context
164K Tokens
Input Price
$0.28
deepseek

DeepSeek: DeepSeek V3.2

Intelligence
32.1
Context
164K Tokens
Input Price
$0.28

Benchmarks

Grouped by job-to-be-done

Only benchmark categories with actual signal are shown. Secondary values stay as simple definitions instead of nested micro-cards.

General intelligence

Broad reasoning, knowledge depth, and flagship benchmark posture.

Intelligence Index
28.1
MMLU Pro
83.3%
GPQA
73.5%
HLE
6.3%
Coding

Software implementation, debugging quality, and coding benchmark signal.

Coding Index
28.4
LiveCodeBench
57.7%
SciCode
36.7%
Math

Formal reasoning, structured problem solving, and competition-style math.

Math Index
49.7
AIME 2025
49.7%
Agent / tool use

Long-horizon execution quality and interactive benchmark evidence.

IFBench
37.8%
TAU2
34.8%
TerminalBench Hard
24.2%
LCR
45.0%

Specs & Pricing

Technical snapshot and cost posture

Specs stay neutral, pricing gets emphasis through values rather than extra containers. Raw provider internals remain in metadata at the end.

Technical snapshot
Context Window
33K Tokens
Vision
Text-first
Modalities
text->text
Tokenizer
DeepSeek
Max Completion
7168
Moderation
No
Supported Parameters
frequency_penalty, include_reasoning, logit_bias, logprobs, max_tokens, min_p, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_logprobs, top_p
Input Modalities
text
Output Modalities
text
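As a quick illustration of how the supported parameters above combine in practice, the sketch below assembles one OpenAI-style chat completions body using a few of them (`temperature`, `top_p`, `max_tokens`, `response_format`, `tools`, `tool_choice`). The model slug and the `search_papers` tool are hypothetical and only there to show the shape.

```python
# Hypothetical request body exercising several supported parameters from the
# technical snapshot; the search_papers tool is invented for illustration.
request_body = {
    "model": "deepseek/deepseek-chat-v3.1",  # assumed OpenRouter slug
    "messages": [
        {"role": "user", "content": "Find recent FP8 microscaling papers."}
    ],
    "temperature": 0.3,
    "top_p": 0.9,
    "max_tokens": 2048,  # stays well under the 7168-token completion cap
    "response_format": {"type": "json_object"},
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "search_papers",
                "description": "Search a paper index by keyword.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}
```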
Price architecture
Input
per 1M input tokens
$0.56
Output
per 1M output tokens
$1.67
Blended
AA 3:1 mix
$0.83

This model is relatively efficient on price. It is the easier fit when sustained prompt volume matters.
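The blended figure follows directly from the listed per-token prices, assuming the "AA 3:1 mix" denotes a 3:1 weighted average of input and output cost. A two-line check:

```python
# Assuming "AA 3:1 mix" means a 3:1 weighted average of input and output
# prices per million tokens.
input_price = 0.56   # $ / 1M input tokens
output_price = 1.67  # $ / 1M output tokens

blended = (3 * input_price + output_price) / 4
print(blended)  # 0.8375, in line with the ~$0.83 blended figure shown above
```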

Metadata

Raw source tables at the end

Verification details remain available, but the page no longer forces them ahead of the editorial read.