What Happens When You Try to Merge AR Reasoning Into Diffusion Models

Diffusion language models are gaining traction. Models like LLaDA, Dream 7B, Mercury, and SDAR match autoregressive models on standard benchmarks while offering 2-4x faster inference through parallel token generation. For a deeper dive into how diffusion LMs work, see our previous post.

SDAR (Synergy of Diffusion and AutoRegression) is a block diffusion model based on the Qwen3 architecture. Instead of generating one token at a time, it generates tokens in blocks and refines them through iterative denoising.

Models like Qwen3-Thinking and DeepSeek-R1 show what’s possible when we let a model “reason” before answering, but this has been done almost entirely with AR architectures; reasoning in diffusion models remains underexplored.

So we asked: can we transfer reasoning capabilities from AR models to diffusion models?

Part 1: The Hypothesis

Finding Shared Structure

SDAR shares its architecture and initial weights with Qwen3-base. Model merging techniques (linear interpolation, SLERP, task vectors) have shown success within AR model families.

A recent paper shaped our approach. Nepal et al. (2025) showed that mathematical reasoning depends on a small number of specialized layers. Remove these critical layers, and math accuracy drops by 80%, while factual recall barely changes.

If we could identify shared critical layers between AR and diffusion models, maybe we could target our transfer there.

Zero Ablation: Finding Critical Layers

We ran zero ablation: zero out each layer’s weights, measure GSM8K accuracy.
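A minimal sketch of the ablation step (numpy stands in for real checkpoint tensors; `evaluate_gsm8k` in the comment is a hypothetical evaluation harness, not a real API):

```python
import numpy as np

def zero_ablate(state_dict, layer_idx, prefix="model.layers"):
    """Return a copy of `state_dict` with every tensor of one transformer
    layer zeroed out (HF-style parameter names assumed, e.g.
    'model.layers.6.self_attn.q_proj.weight')."""
    ablated = {name: w.copy() for name, w in state_dict.items()}
    target = f"{prefix}.{layer_idx}."
    for name in ablated:
        if name.startswith(target):
            ablated[name][:] = 0.0
    return ablated

# The sweep: ablate each of the 36 layers in turn and re-score GSM8K.
# scores = {i: evaluate_gsm8k(zero_ablate(sd, i)) for i in range(36)}
```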

| Model | Critical Layers (lowest accuracy when zeroed) |
|---|---|
| SDAR | 1 (6.25%), 6 (6.25%) |
| Qwen-Thinking | 6 (6.25%), 23 (0%), 26 (0%) |

Layer 6 stood out. Critical for both models. A shared bottleneck where both architectures route math reasoning. This seemed like a good target for merging.

CKA Analysis: Where Models Diverge

We validated with CKA (Centered Kernel Alignment) between AR and diffusion activations:

| Layer Range | CKA Score | Interpretation |
|---|---|---|
| 0-5 | 0.73-0.99 | Nearly identical |
| 6 | 0.07 | Divergence point |
| 7-15 | 0.10-0.21 | Low similarity |
| 16-33 | 0.22-0.30 | Moderate |
| 34-35 | 0.64-0.75 | Converging |
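For reference, a linear-CKA score like those above can be computed from two activation matrices as follows (a standard formulation; our pipeline’s kernel choice and batching may differ):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (n_samples, dim). 1.0 = identical up to linear transform,
    near 0 = unrelated representations."""
    X = X - X.mean(axis=0)  # center features
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

CKA is invariant to isotropic scaling and orthogonal rotation, which is why it is a reasonable probe for “same representation, different basis.”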

Layer 6 shows a dramatic drop from 0.89 to 0.07. Both analyses pointed to Layer 6 as the divergence point. The hypothesis: merge Qwen’s Layer 6 into SDAR to transfer reasoning.

Part 2: What We Tried

Approach 1: Layer-6 Linear Merging

We first finetuned SDAR with LoRA for 300 steps on reasoning data to teach it to produce <think> tokens (required for extended reasoning). Then we targeted Layer 6:

| Configuration | Layer 6 Ratio (SDAR/Qwen) |
|---|---|
| l6_merge_50 | 50/50 |
| l6_merge_70 | 70/30 |
| l6_merge_90 | 90/10 |
| l6_swap_100 | 0/100 (full replacement) |
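The merge itself is a per-tensor interpolation over Layer 6 only (a numpy sketch; real checkpoints would use torch tensors, but the arithmetic is identical):

```python
import numpy as np

def merge_layer(sdar_sd, qwen_sd, layer_idx=6, sdar_ratio=0.7):
    """Interpolate one layer's weights between the two models.
    sdar_ratio=0.7 corresponds to l6_merge_70; 0.0 is the full swap."""
    merged = {name: w.copy() for name, w in sdar_sd.items()}
    target = f"model.layers.{layer_idx}."
    for name in merged:
        if name.startswith(target):
            merged[name] = (sdar_ratio * sdar_sd[name]
                            + (1.0 - sdar_ratio) * qwen_sd[name])
    return merged
```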

Approach 2: Full-Model SLERP

Standard SLERP merging across all layers at various ratios (90/10, 70/30, 50/50).
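SLERP interpolates along the arc between two weight vectors rather than the straight chord, preserving weight norms better than linear averaging. A per-tensor sketch (tools like mergekit apply this tensor-by-tensor with edge-case handling we omit):

```python
import numpy as np

def slerp(w_a, w_b, t, eps=1e-8):
    """Spherical linear interpolation between two weight tensors, each
    treated as a flat vector. t=0.1 gives a 90/10 blend of w_a/w_b."""
    a, b = w_a.ravel(), w_b.ravel()
    cos = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps),
                  -1.0, 1.0)
    theta = np.arccos(cos)
    if theta < 1e-4:
        # Nearly parallel: plain linear interpolation is numerically safer.
        return (1.0 - t) * w_a + t * w_b
    s = np.sin(theta)
    out = (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return out.reshape(w_a.shape)
```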

Approach 3: Task Vectors

Extract the “reasoning delta” from AR models and apply it to diffusion:
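In weight space that means the following (a minimal sketch; all three checkpoints are assumed to share parameter names and shapes):

```python
import numpy as np

def apply_task_vector(diff_sd, ar_thinking_sd, ar_base_sd, lam=0.1):
    """theta_new = theta_diff + lam * (theta_AR_thinking - theta_AR_base).
    The subtraction isolates the AR 'reasoning delta'; lam scales it."""
    return {name: diff_sd[name] + lam * (ar_thinking_sd[name] - ar_base_sd[name])
            for name in diff_sd}
```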

We tried multiple configurations: basic task vectors, norm-preserving variants, MLP-only, various λ values (0.01 to 0.5).

Approach 4: Sophisticated Merging

TIES-Merging, DARE-TIES, DELLA. Techniques designed to handle conflicting weight updates.

Approach 5: Activation Surgery

Train bottleneck modules to transform AR activations into diffusion-compatible representations.

Part 3: Results

GSM8K Results

| Model | GSM8K (n=1319) | Notes |
|---|---|---|
| Qwen3-4B-Thinking | ~95% | AR reasoning model |
| SDAR-4B baseline | 86-89% | Diffusion baseline |
| SDAR-4B-FT (Fresh300) | ~88% | LoRA finetuned on reasoning |
| Full SLERP 90/10 | 87% | = baseline |
| L6 merge 70/30 | 79% | < baseline |
| L6 merge 90/10 | 61-80% | < baseline |
| L6 swap 100 | 58-80% | < baseline |
| Task vectors (all configs) | 0% | Model collapsed |
| TIES, DARE, DELLA | 0% | Model collapsed |

The L6 merges performed worse than baseline SDAR. Early small-sample tests (n=16) showed promising results (up to 100%), but full evaluation revealed this was sample variance. The LoRA finetuned model (Fresh300) maintains baseline performance, showing native finetuning works.

AIME24 Results (Harder Benchmark)

| Model | AIME24 Pass@8 | Tokens | Notes |
|---|---|---|---|
| SDAR-4B baseline | 20% | 32K | Diffusion baseline |
| SDAR-4B-FT (Fresh300) | 20% | 32K | LoRA finetuned |
| L6 swap 100 | 20% | 8K | = baseline |
| L6 merge 70/30 | 23% | 8K | 26/30 problems (timeout) |
| L6 merge 90/10 | 17% | 8K | < baseline |
| L6 merge 50/50 | 13% | 8K | < baseline |
| Full SLERP 90/10 | 23% | 2K | Short context |

On the harder AIME24 benchmark, no merge configuration beat baseline SDAR. The L6 merge 70/30 achieved 23%, but this is within noise of the 20% baseline and required 8K tokens of reasoning. All other configurations performed at or below baseline.

Task Vectors Don’t Transfer

Task vectors didn’t just fail. They destroyed the model:

| Method | Sample Output |
|---|---|
| DARE-TIES | `<\|endoftext\|>` (immediate termination) |
| DELLA | "eagerly eagerly eagerly murdered murdered..." |
| TIES | "" (empty string) |

Compare to baseline SDAR:

To solve this, I need to find how many apples Janet has in total.
Janet starts with 10 apples and buys 5 more.
10 + 5 = 15
The answer is \boxed{15}

Part 4: The Geometry Behind This

Why We Analyzed the Weight Space

The results above tell us what doesn’t work, but not why. To understand that, we looked at the geometry of how these models learn.

We computed deltas between four models:

| Delta | Formula | Description |
|---|---|---|
| D1 | Thinking_AR - Base_AR | AR finetuning direction |
| D2 | FT_Diff - Base_Diff | Diffusion finetuning direction |
| D3 | Base_AR - Base_Diff | Mode difference |

The question: do AR and diffusion learn reasoning in similar directions?

They Don’t

AR and diffusion finetuning directions are orthogonal: geometrically perpendicular in weight space, with cosine similarity near zero (cos=0.001).

Think of it this way: if you want to go North (improve diffusion reasoning), but you push East (apply AR task vector), you make no progress.
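The North/East picture is literally a cosine similarity over flattened deltas (sketch; in practice the two deltas are D1 and D2 as defined above):

```python
import numpy as np

def delta_cosine(after_a, before_a, after_b, before_b):
    """Cosine similarity between two finetuning deltas, concatenated over
    all shared parameters. ~0 means the updates are orthogonal."""
    keys = sorted(after_a)  # assumes all four dicts share parameter names
    d1 = np.concatenate([(after_a[k] - before_a[k]).ravel() for k in keys])
    d2 = np.concatenate([(after_b[k] - before_b[k]).ravel() for k in keys])
    return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2)))
```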

Different Layers, Different Learning

AR and diffusion don’t just learn in different directions. They learn in different places:

AR Finetuning (D1): middle layers 14-23, ~22% relative change
Diffusion Finetuning (D2): edge layers 1-10 and 32-33, ~3% relative change

Almost no overlap. And AR makes changes 7.3x larger than diffusion (mean norm 10.13 vs 1.39).

Why Linear Merging Doesn’t Help

Linear merging creates a weighted average of two sets of weights. It doesn’t transfer capabilities; it dilutes both models.

The Layer 6 results show this clearly: replacing SDAR’s Layer 6 with Qwen’s doesn’t improve reasoning. Layer 6 is critical for both models (zeroing it hurts), but they use it differently. Swapping doesn’t transfer the capability. It just substitutes one implementation for another incompatible one.

Activation Surgery: A Different Direction

Weight-space merging fails because AR and diffusion learn in orthogonal subspaces. Activation-space techniques sidestep this entirely by operating on representations rather than parameters.

We tried one approach: train bottleneck modules to make AR activations statistically similar to diffusion activations (measured by CKA). CKA improved by +0.11 on average. Task accuracy dropped to 0%.

This remains an open direction. Unlike weight-space merging, activation surgery doesn’t face the geometric orthogonality barrier.

Part 5: What We Learned

The Core Discovery

AR and diffusion models learn reasoning in orthogonal weight subspaces.

This isn’t a hyperparameter problem: no alignment method can overcome geometric orthogonality unless it explicitly rotates one subspace into the other.

Three Insights

  1. You can’t just copy weights between AR and diffusion models. The orthogonality means you need something beyond weight manipulation, maybe architecture-level bridges or modules designed to be paradigm-agnostic.

  2. Similar-looking activations don’t mean similar capabilities. Our CKA surgery improved statistical similarity while destroying task performance. You can match the shape of representations without preserving what they encode.

  3. Same weights, same architecture, different computation. SDAR and Qwen start from the same base model but end up routing information through different layers. How you generate (AR vs diffusion) shapes what the model learns.

What Actually Works

If you want better reasoning in diffusion models:

  1. LoRA finetuning on reasoning data. Native subspace learning respects how diffusion models represent information. Full finetuning causes catastrophic forgetting.

  2. Train from scratch with reasoning data. The orthogonality suggests reasoning must be learned within the diffusion paradigm, not transferred from AR.

  3. Wait for scale. AR reasoning improved dramatically with scale and data. Diffusion models haven’t received the same investment yet.
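On point 1, LoRA stays “in subspace” because the update is confined to a rank-r product B·A added on top of frozen weights. A minimal numpy sketch of the forward pass (the alpha/r scaling follows the original LoRA formulation; shapes and hyperparameters here are illustrative, not our exact config):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=4.0, r=2):
    """y = x @ (W + (alpha/r) * B @ A).T without materializing the merged
    weight. W: (out, in) is frozen; A: (r, in) and B: (out, r) are the
    only trained parameters, so updates live in a rank-r subspace."""
    return x @ W.T + (alpha / r) * ((x @ A.T) @ B.T)
```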

Summary

| Approach | Result | Why |
|---|---|---|
| Task vectors | 0% | Orthogonal subspaces (cos=0.001) |
| TIES, DARE, DELLA | 0% | Same geometric problem |
| Layer-6 merging | ≤ baseline | Creates broken hybrid |
| Full SLERP | ≤ baseline | Dilutes both models |
| Activation surgery | 0% (CKA objective) | Wrong objective, not wrong paradigm |
| LoRA finetuning | Works | Native subspace learning |

Weight-based merging doesn’t transfer AR reasoning into diffusion models. The same capability has fundamentally different implementations depending on the generation mechanism. But now we know why, and we know that native finetuning still works.

Open Questions

Can subspace alignment methods help? Techniques like Git Re-Basin align weight spaces through permutation. Could they rotate AR’s reasoning subspace into diffusion’s?

Is the orthogonality fundamental or incidental? Would diffusion models trained differently (different data, different objectives) show more alignment with AR?

Architecture design for transferability: Could models be designed with paradigm-agnostic reasoning modules that enable genuine capability portability?
