ProLU: A Nonlinearity for Sparse Autoencoders

Abstract

This paper presents ProLU, an alternative to ReLU as the activation function in sparse autoencoders that produces a Pareto improvement over both standard sparse autoencoders trained with an L1 penalty and sparse autoencoders trained with a Sqrt(L1) penalty.

The gradient of ProLU wrt. the bias term $b$ is zero, so we generate two candidate classes of differentiable ProLU, distinguished by their synthetic gradients: $\text{ProLU}_{\text{ReLU}}$ and $\text{ProLU}_{\text{STE}}$.

PyTorch Implementation
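A minimal sketch of both variants as PyTorch autograd functions. The class and helper names are illustrative, and the STE variant assumes the ReLU-derivative estimator motivated in the Methods section below:

```python
import torch


class ProLU_ReLU_Fn(torch.autograd.Function):
    """ProLU(m, b) = m where m > 0 and m + b > 0, else 0.
    ReLU-like backward: the synthetic gradient wrt. b is set equal
    to the gradient wrt. m, mirroring ReLU(m + b)."""

    @staticmethod
    def forward(ctx, m, b):
        gate = ((m > 0) & (m + b > 0)).type_as(m)
        ctx.save_for_backward(gate)
        return m * gate

    @staticmethod
    def backward(ctx, grad_out):
        (gate,) = ctx.saved_tensors
        grad = grad_out * gate
        return grad, grad  # identical synthetic gradients for m and b


class ProLU_STE_Fn(torch.autograd.Function):
    """Same forward as above; backward uses straight-through-estimator
    gradients, assuming the threshold's derivative is estimated by ReLU'."""

    @staticmethod
    def forward(ctx, m, b):
        ctx.save_for_backward(m, b)
        gate = ((m > 0) & (m + b > 0)).type_as(m)
        return m * gate

    @staticmethod
    def backward(ctx, grad_out):
        m, b = ctx.saved_tensors
        gate = ((m > 0) & (m + b > 0)).type_as(m)
        ste = (m + b > 0).type_as(m)  # assumed STE: derivative of ReLU at m + b
        grad_b = grad_out * torch.relu(m) * ste
        grad_m = grad_out * gate + grad_b
        return grad_m, grad_b


def prolu_relu(m, b):
    return ProLU_ReLU_Fn.apply(m, b)


def prolu_ste(m, b):
    return ProLU_STE_Fn.apply(m, b)
```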

Introduction

SAE Context and Terminology

Learnable parameters of a sparse autoencoder:

  • $W_{\text{enc}}$: encoder weights

  • $W_{\text{dec}}$: decoder weights

  • $b_{\text{enc}}$: encoder bias

  • $b_{\text{dec}}$: decoder bias

The output of an SAE is given by

$$\text{SAE}(x) = W_{\text{dec}}\,\text{ReLU}\big(W_{\text{enc}}(x - b_{\text{dec}}) + b_{\text{enc}}\big) + b_{\text{dec}}$$

Training

Notation: Encoder/Decoder

Let

$$\text{Enc}(x) = \text{ReLU}\big(W_{\text{enc}}(x - b_{\text{dec}}) + b_{\text{enc}}\big), \qquad \text{Dec}(a) = W_{\text{dec}}\,a + b_{\text{dec}}$$

so that the full computation done by an SAE can be expressed as

$$\text{SAE}(x) = \text{Dec}(\text{Enc}(x))$$

An SAE is trained with gradient descent on

$$\mathcal{L}(x) = \lVert x - \text{SAE}(x) \rVert_2^2 + \lambda\, S(\text{Enc}(x))$$

where $\lambda$ is the sparsity penalty coefficient (often called the “L1 coefficient”) and $S$ is the sparsity penalty function, used to encourage sparsity.
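For concreteness, here is a minimal sketch of this objective in PyTorch; the function and argument names are assumptions, and the tied decoder bias is carried over from the setup described later:

```python
import torch

def sae_loss(x, W_enc, W_dec, b_enc, b_dec, lam, penalty):
    acts = torch.relu((x - b_dec) @ W_enc.T + b_enc)  # Enc(x)
    x_hat = acts @ W_dec.T + b_dec                    # Dec(Enc(x))
    mse = (x - x_hat).pow(2).sum(dim=-1)              # ||x - SAE(x)||^2
    return (mse + lam * penalty(acts)).mean()
```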

$S$ is commonly the L1 norm, but the Sqrt(L1) penalty has recently been shown to produce a Pareto improvement on the L0 and CE metrics. We will use this as a further baseline to compare against when assessing our models, in addition to the standard ReLU-based SAE with L1 penalty.

Motivation: Inconsistent Scaling in Sparse Autoencoders

Due to the affine translation, sparse autoencoder features with nonzero encoder biases only perfectly reconstruct feature magnitudes at a single point.

[Figure: response curve of an SAE feature with nonzero encoder bias]

This poses difficulties if activation magnitudes for a fixed feature tend to vary over a wide range. This potential problem motivates the concept of scale consistency:

[Figure: a scale-consistent response curve]

The bias maintains its role in noise suppression, but no longer translates activation magnitudes when the feature is active.

The lack of gradients for the encoder bias term poses a challenge for learning with gradient descent. This paper formalizes an activation function which gives SAEs this scale-consistent response curve, motivates and proposes two plausible synthetic gradients for it, and compares scale-consistent models trained with each synthetic gradient to standard SAEs and to SAEs trained with the Sqrt(L1) penalty.

Scale Consistency Desiderata

Notation: Centered Submodule

The use of the decoder bias can be viewed as performing centering on the inputs to a centered SAE, then reversing the centering on the outputs:

$$\text{SAE}(x) = \text{SAE}_{\text{cent}}(x - b_{\text{dec}}) + b_{\text{dec}}$$

where $\text{SAE}_{\text{cent}}$ is the corresponding SAE with no decoder bias.

Notation: Specified Feature

Let $w_i$ denote the encoder weights and $b_i$ the encoder bias for the $i$-th feature. Then, let

$$m_i(x) = w_i \cdot x$$

denote the $i$-th feature's pre-bias magnitude, so that a standard SAE's $i$-th activation is $\text{ReLU}(m_i(x) + b_i)$.

Conditional Linearity

When feature $i$ is active, its activation should be linear in $m_i(x)$; in particular, it should equal $m_i(x)$ rather than a translated copy of it.

Noise Suppression Threshold

There should be a threshold below which the feature's activation is zero, so that the bias retains its role in noise suppression.

Proportional ReLU

We define the Proportional ReLU as:

$$\text{ProLU}(m, b) = \begin{cases} m & \text{if } m > 0 \text{ and } m + b > 0 \\ 0 & \text{otherwise} \end{cases}$$
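For example, with $m = 2$ and $b = -1$, the gate condition $m + b = 1 > 0$ holds, so $\text{ProLU}(2, -1) = 2$, whereas $\text{ReLU}(m + b) = 1$: the bias decides whether the feature fires, but no longer shifts its magnitude. With $m = 0.5$ and $b = -1$, the gate fails and the output is $0$.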

Backprop with ProLU:

To use ProLU in SGD-optimized models, we first address the lack of gradients wrt. the $b$ term.

ReLU gradients:

For comparison and later use, we will first consider ReLU: partial derivatives are well defined for $\text{ReLU}(m + b)$ at all points other than $m + b = 0$:

$$\frac{\partial}{\partial m}\,\text{ReLU}(m+b) = \frac{\partial}{\partial b}\,\text{ReLU}(m+b) = \begin{cases} 1 & m + b > 0 \\ 0 & m + b < 0 \end{cases}$$

Gradients of ProLU:

Partials of ProLU wrt. $m$ are similarly well defined:

$$\frac{\partial}{\partial m}\,\text{ProLU}(m,b) = \begin{cases} 1 & m > 0 \text{ and } m + b > 0 \\ 0 & \text{otherwise} \end{cases}$$

However, the partials wrt. $b$ are not useful: the output is piecewise constant in $b$, so they are zero wherever they are defined. We must therefore synthesize these.

Methods

Notation: Synthetic Gradients

Let $\hat{\partial}_x f$ denote the synthetic partial derivative of $f$ wrt. $x$, and $\hat{\nabla} f$ the synthetic gradient of $f$, used for backpropagation as a stand-in for the gradient.

Different synthetic gradient types

We train two classes of ProLU with different synthetic gradients. These are distinguished by their subscript: $\text{ProLU}_{\text{ReLU}}$ and $\text{ProLU}_{\text{STE}}$.

They are identical in output, but have different synthetic gradients. I.e.,

$$\text{ProLU}_{\text{ReLU}}(m, b) = \text{ProLU}_{\text{STE}}(m, b), \qquad \hat{\nabla}\,\text{ProLU}_{\text{ReLU}} \neq \hat{\nabla}\,\text{ProLU}_{\text{STE}}$$

Defining $\text{ProLU}_{\text{ReLU}}$: ReLU-like gradients

The first synthetic gradient is very similar to the gradient for ReLU. We retain the gradient wrt. $m$, and define the synthetic gradient wrt. $b$ to be the same as the gradient wrt. $m$:

$$\hat{\partial}_b\,\text{ProLU}_{\text{ReLU}}(m,b) := \frac{\partial}{\partial m}\,\text{ProLU}(m,b)$$

Defining $\text{ProLU}_{\text{STE}}$: Derivation from straight-through estimator

The second class of ProLU uses synthetic gradients for both $m$ and $b$. It can be motivated by framing ReLU and ProLU in terms of the threshold function, together with a common choice of straight-through estimator (STE) for the threshold function. This is a plausible explanation for the observed empirical performance, but it should be noted that there are many degrees of freedom and possible alternative derivations.

Setup

The threshold function is defined as follows:

$$\theta(x) = \begin{cases} 1 & x > 0 \\ 0 & x \le 0 \end{cases}$$

We will rephrase the partial derivative of ReLU in terms of the threshold function for ease of later notation:

$$\frac{d}{dx}\,\text{ReLU}(x) = \theta(x)$$

It is common to use a straight-through estimator (STE) to approximate the gradient of the threshold function:

$$\hat{\partial}_x\,\theta(x) := \hat{\theta}'(x)$$

for some choice of surrogate derivative $\hat{\theta}'$.

We can reframe ProLU in terms of the threshold function:

$$\text{ProLU}(m, b) = \text{ReLU}(m)\,\theta(m + b) = m\,\theta(m)\,\theta(m + b)$$

Synthetic Gradients wrt. $m$

Now, we take partial derivatives of ProLU wrt. $m$, using the STE approximation for the threshold function:

$$\hat{\partial}_m\,\text{ProLU}_{\text{STE}}(m,b) = \theta(m)\,\theta(m+b) + \text{ReLU}(m)\,\hat{\theta}'(m+b)$$

Synthetic Gradients wrt. $b$

$$\hat{\partial}_b\,\text{ProLU}_{\text{STE}}(m,b) = \text{ReLU}(m)\,\hat{\theta}'(m+b)$$

Choice of Straight-Through Estimator

There are many possible functions to use for $\hat{\theta}$. In our experiments, we take the derivative of ReLU as the choice of straight-through estimator, i.e. $\hat{\theta}'(x) := \frac{d}{dx}\text{ReLU}(x) = \theta(x)$. This choice has been used in training quantized neural nets.

Then, synthetic gradients wrt. $m$ are given by

$$\hat{\partial}_m\,\text{ProLU}_{\text{STE}}(m,b) = \theta(m)\,\theta(m+b) + \text{ReLU}(m)\,\theta(m+b)$$

and wrt. $b$ are given by

$$\hat{\partial}_b\,\text{ProLU}_{\text{STE}}(m,b) = \text{ReLU}(m)\,\theta(m+b) = \text{ProLU}(m,b)$$

Sparse Autoencoder

We can express the encoder of a ProLU SAE as

$$\text{Enc}_{\text{ProLU}}(x) = \text{ProLU}\big(W_{\text{enc}}(x - b_{\text{dec}}),\; b_{\text{enc}}\big)$$

No change is needed to the decoder. Thus,

$$\text{SAE}_{\text{ProLU}}(x) = \text{Dec}\big(\text{Enc}_{\text{ProLU}}(x)\big)$$
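A sketch of the resulting module, reusing the `prolu_ste` helper from the implementation above (the module layout and initialization are illustrative, not the exact training configuration):

```python
import torch
import torch.nn as nn

class ProLUSAE(nn.Module):
    def __init__(self, d_model, d_dict):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_dict, d_model) / d_model**0.5)
        self.W_dec = nn.Parameter(torch.randn(d_model, d_dict) / d_dict**0.5)
        self.b_enc = nn.Parameter(torch.zeros(d_dict))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        m = (x - self.b_dec) @ self.W_enc.T      # pre-bias feature magnitudes
        acts = prolu_ste(m, self.b_enc)          # gated, scale-consistent activations
        return acts @ self.W_dec.T + self.b_dec  # decoder is unchanged
```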

Experiment Setup

Shared among all sweeps:

  • Adam optimizer

  • Data

    • Trained on GPT-2 layer 6 pre-residual activations

    • Tokens: ~400M tokens from The Pile (thanks to Alan Cooney's pre-tokenized Pile)

    • -> ~100k gradient steps

  • LR schedule

  • Anthropic resampling

    • I used 3e-6 as the dead threshold rather than 0

    • Resample at 25,000 and 50,000 steps

    • The proportion of the average encoder norm resampled to varied between sweeps

  • Normalization

  • SAE details

    • Dictionary expansion factor of 16

    • Tied decoder bias, untied encoder/decoder weights

Varying between sweeps:

  • Experiment 1:

    • 30 total runs

    • Resampled to 0.02 of avg encoder norm

  • Experiment 2:

    • 48 total runs

    • Resampled to 0.02 of avg encoder norm

  • Experiment 3:

    • 30 total runs

    • Resampled to 0.2 of avg encoder norm

    • adjusted L1-coefficient ranges for each model, to get more overlap in L0 ranges.

      • different architectures respond very differently to L1 coefficients

Varying within sweeps

  • L1 coefficient

  • Architecture: choice of nonlinearity ($\text{ReLU}$, $\text{ProLU}_{\text{ReLU}}$, or $\text{ProLU}_{\text{STE}}$)

  • L1 Penalty type (sketched in code below)

    • L1: $S(a) = \lVert a \rVert_1$

    • Sqrt(L1): $S(a) = \sum_i \sqrt{|a_i|}$
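Sketches of the two penalty functions, as referenced in the list above (the elementwise-sqrt form of Sqrt(L1) is an assumption about the baseline's definition):

```python
def l1_penalty(acts):
    return acts.abs().sum(dim=-1)

def sqrt_l1_penalty(acts):
    # assumed form: elementwise square root, then sum
    return acts.abs().sqrt().sum(dim=-1)
```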

Results

Let:

  • $\text{CE}_{\text{clean}}$ be the CE loss of the model unperturbed on the data distribution

  • $\text{CE}_{\text{recon}}$ be the CE loss of the model when activations are replaced with the reconstructed activations

  • $\text{CE}_{\text{zero}}$ be the CE loss of the model when activations are replaced with the zero vector

Degradation: $\text{CE}_{\text{recon}} - \text{CE}_{\text{clean}}$, or Information Lost. This measures how much information about the correct next token the model loses by having its activations replaced with the SAE's reconstruction.
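One way to measure these quantities is to patch the relevant activations with a forward pre-hook; here is a sketch in plain PyTorch, where `model`, `layer`, and `tokens` are placeholders for the actual model, patched submodule, and batch:

```python
import torch
import torch.nn.functional as F

def ce_loss(model, tokens, layer=None, patch=None):
    """CE loss of `model` on `tokens`, optionally replacing `layer`'s
    input activations with `patch(activations)` (e.g. an SAE forward
    pass, or a function returning zeros)."""
    handle = None
    if patch is not None:
        hook = lambda module, args: (patch(args[0]),) + args[1:]
        handle = layer.register_forward_pre_hook(hook)
    try:
        logits = model(tokens)  # [batch, seq, vocab]
        loss = F.cross_entropy(logits[:, :-1].flatten(0, 1),
                               tokens[:, 1:].flatten())
    finally:
        if handle is not None:
            handle.remove()
    return loss

# degradation = ce_loss(model, toks, layer, sae) - ce_loss(model, toks)
```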

For the higher L0 range:

  • The Pareto-best architecture uses the $\text{ProLU}_{\text{STE}}$ nonlinearity with an L1 sparsity penalty.

For the lower L0 range:

  • There are no occurrences of models using $\text{ProLU}_{\text{STE}}$ with L1 penalty in this L0 range

  • Of the remaining models, $\text{ProLU}_{\text{STE}}$ with Sqrt(L1) penalty is Pareto-best.

Further Investigation

MSE/L1 Pareto Frontier

The synthetic gradients of ProLU are not the true gradients of the loss landscape, so it would be a reasonable default to expect these models to perform worse than a vanilla SAE. Indeed, I expect they may perform worse on the optimization target; the reason this is able to work is that there is slack in the problem, introduced by our inability to optimize for our actual target directly. Our current options are to optimize for L1 or Sqrt(L1) as sparsity proxies for what we actually want, because L0 is not a differentiable metric.

Actual target: minimize L0 and bits lost
Optimization (proxy) target: minimize L1 (or Sqrt(L1)) and MSE

Because we’re not optimizing for the actual target, I am not so surprised that there may be weird tricks we can do to get more of what we want.

In this vein, my prediction after seeing the good performance on the actual target (and prior to checking this prediction) was:

Despite improved performance on degradation/L0, ProLU SAEs will have the same or worse performance on the MSE/L1 curve.
We may also see the higher-performing architectures have greater L1/L0 ratios.

Let’s check:

In favor of the hypothesis, while other architectures sometimes join it on the frontier, the vanilla ReLU is present for the entirety of this Pareto frontier. On the other hand, at lower sparsity levels $\text{ProLU}_{\text{STE}}$ joins it at the frontier. So the claim that this change does not improve performance on the optimization target seems true, but it is not clear that better performance on the actual target comes from worse performance on the optimization target.

This suggests a possible reason why the technique works well:
Possibly the gains from this technique do not come from scale consistency so much as from the fact that it forced us to synthesize some gradients, and those gradients happened to point more in the direction of what we actually want.

Here is the graph of L1 norm versus L0 norm:

It looks possible that what is working well here is that the learned features experience less suppression, but that may not be the only thing going on. Feature suppression is still consistent with the scale-consistency hypothesis: consistent undershooting would be an expected side effect if scale inconsistency is a real problem, since regular SAEs may be less able to filter unwanted activations if they keep biases near zero in order to minimize the errors induced by scale inconsistency.

More investigation is needed here to build a complete or confident picture of what causes the performance gains in ProLU SAEs.

Unfortunately, I did not log the metric needed, so I can't compare with that curve, but I could load the models to create those graphs in follow-up work.

Acknowledgements

Noa Nabeshima and Arunim Agarwal gave useful feedback and editing help on the draft of this post.

Thanks to Mason Krug for in-depth editing of my grant proposal, which helped seed this writeup and clarify my communication.

How to Cite

@misc{ProLUNonlinearity,
   title = {ProLU: A Nonlinearity for Sparse Autoencoders},
   author = {Glen M. Taggart},
   year = {2024},
   howpublished = {\url{https://www.alignmentforum.org/posts/HEpufTdakGTTKgoYF/prolu-a-nonlinearity-for-sparse-autoencoders}},
}