What Happens When You Try to Merge AR Reasoning Into Diffusion Models

Diffusion language models have become one of the most talked-about developments in the field lately. Models like LLaDA, Dream 7B, Mercury, and SDAR match autoregressive (AR) models on standard benchmarks while offering 2-4x faster inference through parallel token generation. For a deeper dive into how diffusion LMs work, see our previous post.

SDAR (Synergy of Diffusion and AutoRegression) is a block diffusion model based on the Qwen3 architecture. Rather than producing tokens one at a time, it generates them in blocks and refines each block through iterative denoising.

Models such as Qwen3-Thinking and DeepSeek-R1 demonstrate the value of letting a model "reason" before responding, but this has been done almost exclusively with AR systems. Reasoning in diffusion models remains underexplored.

So we asked: can we transfer reasoning capabilities from AR models to diffusion models?


Part 1: The Hypothesis

Finding Shared Structure

SDAR and Qwen3-base share the same architecture and initial weights. Within AR model families, model merging strategies including task vectors, SLERP, and linear interpolation have proven effective.

Our approach was shaped by Nepal et al. (2025), which showed that mathematical reasoning relies on a small number of specialised layers: remove these crucial layers and maths accuracy drops by 80%, while factual recall hardly changes.

Zero Ablation: Finding Critical Layers

We ran zero ablation: zero out each layer's weights in turn and measure GSM8K accuracy.
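As a concrete illustration, here is a minimal sketch of that ablation loop. It assumes a Hugging Face-style Qwen3 module layout (`model.model.layers`) and a placeholder `eval_gsm8k` harness; neither is the exact code we ran.

```python
import torch

def zero_ablate_layer(model, layer_idx):
    """Zero out every weight in one transformer layer; return the originals so it can be undone."""
    layer = model.model.layers[layer_idx]  # assumes HF-style Qwen3 layout
    saved = {name: p.detach().clone() for name, p in layer.named_parameters()}
    with torch.no_grad():
        for p in layer.parameters():
            p.zero_()
    return saved

def restore_layer(model, layer_idx, saved):
    """Put the saved weights back after evaluating the ablated model."""
    layer = model.model.layers[layer_idx]
    with torch.no_grad():
        for name, p in layer.named_parameters():
            p.copy_(saved[name])

# Sweep every layer, measuring GSM8K accuracy with that layer zeroed:
# for i in range(model.config.num_hidden_layers):
#     saved = zero_ablate_layer(model, i)
#     acc = eval_gsm8k(model)   # placeholder for your eval harness
#     restore_layer(model, i, saved)
```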

| Model | Critical layers (lowest accuracy when zeroed) |
| --- | --- |
| SDAR | 1 (6.25%), 6 (6.25%) |
| Qwen-Thinking | 6 (6.25%), 23 (0%), 26 (0%) |

Layer 6 stood out: it was critical for both models. A shared bottleneck through which both architectures route mathematical reasoning made it a natural target for merging.

CKA Analysis: Where Models Diverge

We validated with CKA (Centered Kernel Alignment) between AR and diffusion activations:

| Layer range | CKA score | Interpretation |
| --- | --- | --- |
| 0-5 | 0.73-0.99 | Nearly identical |
| 6 | 0.07 | Divergence point |
| 7-15 | 0.10-0.21 | Low similarity |
| 16-33 | 0.22-0.30 | Moderate |
| 34-35 | 0.64-0.75 | Converging |

Layer 6 shows a dramatic drop from 0.89 to 0.07. Both analyses pointed to Layer 6 as the divergence point. The hypothesis: merge Qwen’s Layer 6 into SDAR to transfer reasoning.
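For reference, the per-layer scores above come from a linear CKA comparison of matched activations. A minimal sketch of linear CKA on (n_samples, hidden_dim) activation matrices; the variable names are illustrative, not from our pipeline:

```python
import torch

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (n_samples, hidden_dim)."""
    X = X - X.mean(dim=0, keepdim=True)   # centre each feature
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.T @ Y).norm(p="fro") ** 2   # ||X^T Y||_F^2
    return (hsic / ((X.T @ X).norm(p="fro") * (Y.T @ Y).norm(p="fro"))).item()

# Per-layer comparison over the same prompts fed to both models:
# cka_by_layer = [linear_cka(acts_ar[l], acts_diff[l]) for l in range(num_layers)]
```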

Part 2: What We Tried

Approach 1: Layer-6 Linear Merging

We first finetuned SDAR with LoRA for 300 steps on reasoning data to teach it to produce <think> tokens (required for extended reasoning). Then we targeted Layer 6:

| Configuration | Layer 6 ratio (SDAR/Qwen) |
| --- | --- |
| l6_merge_5050 | 50/50 |
| l6_merge_7030 | 70/30 |
| l6_merge_9010 | 90/10 |
| l6_swap_100 | 0/100 (full replacement) |
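A minimal sketch of the Layer-6 interpolation, assuming both checkpoints expose the same HF-style module layout (`model.model.layers`) and identical parameter names:

```python
import torch

def merge_layer(sdar_model, qwen_model, layer_idx=6, sdar_ratio=0.7):
    """Interpolate one layer in place: w <- sdar_ratio * w_sdar + (1 - sdar_ratio) * w_qwen."""
    qwen_params = dict(qwen_model.model.layers[layer_idx].named_parameters())
    with torch.no_grad():
        for name, p in sdar_model.model.layers[layer_idx].named_parameters():
            p.copy_(sdar_ratio * p + (1.0 - sdar_ratio) * qwen_params[name])

# l6_merge_7030: merge_layer(sdar, qwen, layer_idx=6, sdar_ratio=0.7)
# l6_swap_100:   merge_layer(sdar, qwen, layer_idx=6, sdar_ratio=0.0)
```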

Approach 2: Full-Model SLERP

Standard SLERP merging across all layers at various ratios (90/10, 70/30, 50/50).
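A tensor-by-tensor SLERP sketch, assuming matched state dicts; in practice merges like this are usually run through a toolkit such as mergekit, so treat this as illustrative:

```python
import torch

def slerp(w_a, w_b, t, eps=1e-8):
    """Spherical interpolation between two weight tensors; t=0.1 weights heavily toward w_a (a ~90/10 merge)."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    cos_omega = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < 1e-4:   # nearly parallel: fall back to linear interpolation
        merged = (1 - t) * a + t * b
    else:
        merged = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.view_as(w_a).to(w_a.dtype)

# merged_sd = {k: slerp(sdar_sd[k], qwen_sd[k], t=0.1) for k in sdar_sd}
```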

Approach 3: Task Vectors

Extract the “reasoning delta” from AR models and apply it to diffusion:
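In state-dict form, this looks roughly like the sketch below, where lambda (`lam`) is the scaling factor we swept and the state-dict names are assumptions:

```python
def apply_task_vector(sdar_sd, qwen_thinking_sd, qwen_base_sd, lam=0.1):
    """theta_merged = theta_sdar + lam * (theta_thinking - theta_base), applied per tensor."""
    return {
        name: sdar_sd[name] + lam * (qwen_thinking_sd[name] - qwen_base_sd[name])
        for name in sdar_sd
    }
```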

We tried multiple configurations: basic task vectors, norm-preserving variants, MLP-only, various λ values (0.01 to 0.5).

Approach 4: Sophisticated Merging

TIES-Merging, DARE-TIES, and DELLA: techniques designed to handle conflicting weight updates.
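These methods all operate on the same task vector. As one example, DARE's core drop-and-rescale step looks roughly like the sketch below (TIES adds magnitude trimming and sign election on top); this is a hedged illustration, not the exact implementation we used:

```python
import torch

def dare_drop_and_rescale(delta, drop_rate=0.9):
    """Randomly drop entries of a task vector and rescale the survivors so the expected update is unchanged."""
    mask = (torch.rand_like(delta.float()) >= drop_rate).to(delta.dtype)
    return delta * mask / (1.0 - drop_rate)

# sparse_delta = {name: dare_drop_and_rescale(v) for name, v in reasoning_delta.items()}
```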

Approach 5: Activation Surgery

Train bottleneck modules to transform AR activations into diffusion-compatible representations.
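A sketch of the kind of bottleneck module we mean; the hidden and bottleneck dimensions are illustrative, and the training objective (CKA matching) is described in Part 4:

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual bottleneck trained to map AR hidden states toward diffusion-compatible ones."""
    def __init__(self, hidden_dim, bottleneck_dim=256):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        nn.init.zeros_(self.up.weight)   # start as an identity map
        nn.init.zeros_(self.up.bias)

    def forward(self, ar_hidden):
        # residual connection keeps the transform close to identity early in training
        return ar_hidden + self.up(self.act(self.down(ar_hidden)))
```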

Part 3: Results

GSM8K Results

| Model | GSM8K (n=1319) | Notes |
| --- | --- | --- |
| Qwen3-4B-Thinking | ~95% | AR reasoning model |
| SDAR-4B baseline | 86-89% | Diffusion baseline |
| SDAR-4B-FT (Fresh300) | ~88% | LoRA finetuned on reasoning |
| Full SLERP 90/10 | 87% | = baseline |
| L6 merge 70/30 | 79% | < baseline |
| L6 merge 90/10 | 61-80% | < baseline |
| L6 swap 100 | 58-80% | < baseline |
| Task vectors (all configs) | 0% | Model collapsed |
| TIES, DARE, DELLA | 0% | Model collapsed |

The L6 merges performed worse than baseline SDAR. Early small-sample tests (n=16) looked promising (up to 100%), but full evaluation showed this was sample variance. The LoRA-finetuned model (Fresh300) maintains baseline performance, showing that native finetuning works.

AIME24 Results (Harder Benchmark)

| Model | AIME24 Pass@8 | Tokens | Notes |
| --- | --- | --- | --- |
| SDAR-4B baseline | 20% | 32K | Diffusion baseline |
| SDAR-4B-FT (Fresh300) | 20% | 32K | LoRA finetuned |
| L6 swap 100 | 20% | 8K | = baseline |
| L6 merge 70/30 | 23% | 8K | 26/30 problems (timeout) |
| L6 merge 90/10 | 17% | 8K | < baseline |
| L6 merge 50/50 | 13% | 8K | < baseline |
| Full SLERP 90/10 | 23% | 2K | Short context |

No merging configuration outperformed baseline SDAR on the more difficult AIME24 benchmark. The L6 merge 70/30 reached 23%, but this is within the noise of the 20% baseline and required 8K tokens of reasoning. All other configurations performed at or below baseline.

Task Vectors Don’t Transfer

Task vectors didn’t just fail. They destroyed the model:

| Method | Sample output |
| --- | --- |
| DARE-TIES | `<|endoftext|>` (immediate termination) |
| DELLA | "eagerly eagerly eagerly murdered murdered..." |
| TIES | "" (empty string) |

Compare to baseline SDAR:

To solve this, I need to find how many apples Janet has in total.
Janet starts with 10 apples and buys 5 more.
10 + 5 = 15
The answer is \boxed{15}

Part 4: The Geometry Behind This

Why We Analyzed the Weight Space

The results above tell us what doesn’t work, but not why. To understand that, we looked at the geometry of how these models learn.

We computed deltas between four models:

| Delta | Formula | Description |
| --- | --- | --- |
| D1 | Thinking_AR - Base_AR | AR finetuning direction |
| D2 | FT_Diff - Base_Diff | Diffusion finetuning direction |
| D3 | Base_AR - Base_Diff | Mode difference |

The question: do AR and diffusion learn reasoning in similar directions?

They Don’t

AR and diffusion finetuning directions are orthogonal: geometrically perpendicular in weight space (cosine similarity of 0.001).

Think of it this way: if you want to go North (improve diffusion reasoning), but you push East (apply AR task vector), you make no progress.
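Measuring this is straightforward: flatten each delta into one long vector and take the cosine similarity. A sketch, assuming matched state dicts with the illustrative names below:

```python
import torch

def flatten_delta(ft_sd, base_sd):
    """Finetuning direction as one long vector: concat(theta_ft - theta_base) over all tensors."""
    return torch.cat(
        [(ft_sd[k].float() - base_sd[k].float()).flatten() for k in sorted(base_sd)]
    )

# D1 = AR finetuning direction, D2 = diffusion finetuning direction
d1 = flatten_delta(qwen_thinking_sd, qwen_base_sd)
d2 = flatten_delta(sdar_ft_sd, sdar_base_sd)
cos = torch.nn.functional.cosine_similarity(d1, d2, dim=0).item()  # ~0 means orthogonal
```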

Different Layers, Different Learning

AR and diffusion don’t just learn in different directions. They learn in different places:

AR Finetuning (D1): Middle layers 14-23, ~22% relative change
Diffusion Finetuning (D2): Edge layers 1-10 and 32-33, ~3% relative change

Almost no overlap. And AR makes changes 7.3x larger than diffusion (mean norm 10.13 vs 1.39).
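The per-layer numbers come from comparing each finetuned checkpoint against its own base. A sketch of the relative-change computation, assuming HF-style parameter names (`model.layers.{i}.*`):

```python
import torch

def per_layer_relative_change(ft_sd, base_sd, num_layers):
    """||theta_ft - theta_base|| / ||theta_base||, computed separately for each transformer layer."""
    changes = []
    for i in range(num_layers):
        prefix = f"model.layers.{i}."
        keys = [k for k in base_sd if k.startswith(prefix)]
        delta_sq = sum((ft_sd[k].float() - base_sd[k].float()).pow(2).sum() for k in keys)
        base_sq = sum(base_sd[k].float().pow(2).sum() for k in keys)
        changes.append(torch.sqrt(delta_sq / base_sq).item())
    return changes
```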

Why Linear Merging Doesn’t Help

Linear merging produces a weighted average of two sets of weights. It dilutes both models rather than transferring capabilities.

The Layer 6 results make this evident: replacing SDAR's Layer 6 with Qwen's doesn't improve reasoning. Layer 6 is critical for both models (zeroing it hurts), but they use it differently. Swapping just substitutes one incompatible implementation for another rather than transferring the capability.

Activation Surgery: A Different Direction

Weight-space merging fails because AR and diffusion learn in orthogonal subspaces. Activation-space approaches sidestep the issue entirely by working on representations rather than parameters.

We tried one approach: train bottleneck modules to make AR activations statistically similar to diffusion activations (measured by CKA). CKA improved by +0.11 on average. Task accuracy dropped to 0%.

This remains an open direction. Unlike weight-space merging, activation surgery doesn’t face the geometric orthogonality barrier.

Part 5: What We Learned

The Core Discovery

AR and diffusion models learn reasoning in orthogonal weight subspaces.

This is not a hyperparameter problem: no weight-space alignment technique can overcome geometric orthogonality short of explicitly rotating one subspace into the other.

Three Insights

  1. You can't just copy weights between AR and diffusion models. Because of the orthogonality, you need something beyond weight manipulation, such as architecture-level bridges or paradigm-agnostic modules.

  2. Similar-looking activations don't mean similar capabilities. Our CKA surgery improved statistical similarity while destroying task performance. You can match the shape of representations without preserving the information they carry.

  3. Same weights, same architecture, different computation. SDAR and Qwen start from an identical base model but end up routing information through different layers. How you generate (AR vs diffusion) shapes what the model learns.

What Actually Works

If you want better reasoning in diffusion models:

  1. LoRA finetuning on reasoning data. While full finetuning risks catastrophic forgetting, LoRA's native-subspace learning respects the way diffusion models represent information (see the sketch after this list).

  2. Train from scratch with reasoning data. The orthogonality suggests that reasoning must be learnt within the diffusion paradigm rather than transferred from autoregressive models.

  3. Wait for scale. AR reasoning improved dramatically with scale and data. Diffusion models haven’t received the same investment yet.
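For completeness, a minimal PEFT-style LoRA setup in the spirit of the Fresh300 run. The rank, alpha, and target modules below are illustrative rather than the exact values we used, and SDAR's block-diffusion training loop is assumed to live elsewhere:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                     # illustrative rank, not the exact value we used
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(sdar_model, lora_config)  # sdar_model loaded elsewhere
model.print_trainable_parameters()
# ...then train for ~300 steps on reasoning traces with the block-diffusion objective.
```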

Summary

| Approach | Result | Why |
| --- | --- | --- |
| Task vectors | 0% | Orthogonal subspaces (cos=0.001) |
| TIES, DARE, DELLA | 0% | Same geometric problem |
| Layer-6 merging | ≤ baseline | Creates broken hybrid |
| Full SLERP | ≤ baseline | Dilutes both models |
| Activation surgery | 0% (CKA objective) | Wrong objective, not wrong paradigm |
| LoRA finetuning | Works | Native subspace learning |

Weight-based merging doesn’t transfer AR reasoning into diffusion models. The same capability has fundamentally different implementations depending on the generation process. However, we now understand why native finetuning is still effective.

Open Questions

Are subspace alignment techniques useful? Methods such as Git Re-Basin use permutation to align weight spaces. Could they rotate AR’s reasoning subspace into diffusion’s?

Is the orthogonality fundamental or incidental? Would diffusion models trained with different data and objectives show greater alignment with AR models?

Architecture design for transferability: Could models be designed with paradigm-agnostic reasoning modules that allow for true capability portability?

References
