Model profile
qwen
New in 2026

Qwen: Qwen3 Coder Next

Qwen: Qwen3 Coder Next is a budget, text-first model from Qwen with a fast runtime profile, a large context window, and its clearest fit in long-context research and agent workflows.

Best for: Long-context research / Agent workflows · Fast latency · Large context · Budget pricing
Intelligence
28.3

Benchmark blend

Coding
22.9

Dev workflow signal

Context
262K Tokens

Large

Input Price
$0.20

Budget tier

Decision snapshot
58

Qwen: Qwen3 Coder Next currently reads as a budget text-first option with large context and a fast runtime profile.

Overall profile
Selective fit
Best for
Long-context research / Agent workflows
Latency tier
Fast
Price tier
Budget
Source coverage
OpenRouter · Artificial Analysis

Decision Strip

Decision rail before the raw tables

Core buy-side signals stay in one pass. The rest of the page expands only after intelligence, speed, context, and price are clear.

Intelligence
28.3
28

General reasoning and benchmark headroom.

Limited
Speed
153 tok/s
94

TTFT 0.80s

Above average
Context
262K Tokens
88

How much prompt and task state can stay in view.

Above average
Price
$0.20
86

$1.20 output / 1M

Efficient

Editorial Profile

Qwen: Qwen3 Coder Next in one narrative

Positioning, tradeoffs, and fit are consolidated into one read instead of repeating the same story across separate cards.

Selective fit · Coding score 23 · Math score 36

Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per token, delivering performance comparable to models with 10 to 20x higher active compute, which makes it well suited for cost-sensitive, always-on agent deployment. The model is trained with a strong agentic focus and performs reliably on long-horizon coding tasks, complex tool usage, and recovery from execution failures. With a native 256k context window, it integrates cleanly into real-world CLI and IDE environments and adapts well to common agent scaffolds used by modern coding tools. The model operates exclusively in non-thinking mode and does not emit <think> blocks, simplifying integration for production coding agents.
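Because the model is served through an OpenAI-compatible chat completions API on OpenRouter (one of the sources listed above), a plain HTTP call is enough to drop it into an agent loop. The sketch below assumes the model slug qwen/qwen3-coder-next; take the exact identifier from the provider listing. Since the model runs only in non-thinking mode, the returned message content can be used directly without stripping <think> blocks.

# Minimal sketch: one chat-completions call against OpenRouter's
# OpenAI-compatible endpoint. The model slug is an assumption.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "qwen/qwen3-coder-next",  # assumed slug
        "messages": [
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": "Refactor this function to remove the nested loop."},
        ],
        "max_tokens": 2048,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])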

Identity

qwen text-first profile

Positioning

Long-context research and agent workflows, backed by large context and a fast runtime.

Cost posture

Efficient spend profile. More comfortable for sustained prompt volume if the capability fit is right.

Strengths
  • Large context headroom supports repo-wide prompts and long research sessions.

  • Latency and throughput look responsive enough for interactive loops.

Tradeoffs
  • Budget-friendly input pricing is a strength, but raw capability may vary by workload.

  • Current metadata points to a text-first profile rather than a broad multimodal one.

Best fit
  • Long-context summarization, repo analysis, and policy or document review.

Compare Next

Similar profiles worth opening next

qwen

Qwen: Qwen3.5 397B A17B

Intelligence
45.0
Context
262K Tokens
Input Price
$0.60
qwen

Qwen: Qwen3.5-122B-A10B

Intelligence
41.6
Context
262K Tokens
Input Price
$0.40
qwen

Qwen: Qwen3 Max Thinking

Intelligence
39.9
Context
262K Tokens
Input Price
$1.20

Benchmarks

Grouped by job-to-be-done

Only benchmark categories with actual signal are shown. Secondary values stay as simple definitions instead of nested micro-cards.

General intelligence

Broad reasoning, knowledge depth, and flagship benchmark posture.

Intelligence Index
28.3
GPQA
73.7%
HLE
9.3%
Coding

Software implementation, debugging quality, and coding benchmark signal.

Coding Index
22.9
SciCode
32.3%
Agent / tool use

Long-horizon execution quality and interactive benchmark evidence.

IFBench
35.2%
TAU2
79.5%
TerminalBench Hard
18.2%
LCR
40.0%

Specs & Pricing

Technical snapshot and cost posture

Specs stay neutral, pricing gets emphasis through values rather than extra containers. Raw provider internals remain in metadata at the end.

Technical snapshot
Context Window
262K Tokens
Vision
Text-first
Modalities
text → text
Tokenizer
Qwen
Max Completion
65,536 tokens
Moderation
No
Supported Parameters
frequency_penalty, logit_bias, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
Input Modalities
text
Output Modalities
text
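As a rough illustration of the supported parameters above, the sketch below builds a request payload that exercises sampling controls and tool calling. Only the parameter names come from the list; the tool schema and the values themselves are hypothetical.

# Sketch of a request body using several supported parameters
# (temperature, top_p, seed, max_tokens, tools, tool_choice).
payload = {
    "model": "qwen/qwen3-coder-next",  # assumed slug, as above
    "messages": [
        {"role": "user", "content": "List the failing tests in this repo."}
    ],
    "temperature": 0.2,
    "top_p": 0.9,
    "seed": 42,
    "max_tokens": 1024,  # well under the 65,536-token completion cap
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "run_shell",  # hypothetical tool
                "description": "Run a shell command in the workspace.",
                "parameters": {
                    "type": "object",
                    "properties": {"command": {"type": "string"}},
                    "required": ["command"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}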
Price architecture
Input
per 1M input tokens
$0.20
Output
per 1M output tokens
$1.20
Blended
Artificial Analysis 3:1 input:output mix
$0.53

This model is relatively efficient on price. It is the easier fit when sustained prompt volume matters.

OpenRouter Cache Read
$0.00
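
To make the cost posture concrete, the sketch below estimates per-call spend from the listed per-1M-token prices. The token counts are illustrative, and the blended figure above is the source's own 3:1 calculation rather than something reproduced here.

# Minimal cost estimate from the listed prices (USD per 1M tokens).
INPUT_PER_M = 0.20
OUTPUT_PER_M = 1.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one call at the listed prices."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 200K-token repo-analysis prompt with an 8K-token answer
print(round(estimate_cost(200_000, 8_000), 4))  # ≈ 0.0496 USD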

Metadata

Raw source tables at the end

Verification details remain available, but the page no longer forces them ahead of the editorial read.