Thoughts on Formalizing Composition

NB: I’m surprised I couldn’t find anything on this, but I also didn’t spend very long looking. If there is a result from linear algebra that solves this problem then I’d love to hear it. Purpose of the post: getting feedback + ability to reference this in the future.

I’m most interested in these forms of feedback:

  • Meta—Is this a useful direction to think about?

  • Related Work—Does there already exist some work on this?

  • Math—Did I miss an important math fact or make a reasoning error?

  • Clarity of Writing—Which parts were hard to understand?

While not strictly necessary as a prerequisite, I’d recommend reading at least the introduction to transformer circuits.

Intro

When investigating attention-only transformer models, a natural question to ask is “how much do attention heads in different layers ‘communicate’ with each other via the residual stream?” If there were no communication at all, then an $L$-layer transformer with $H$ heads per layer would simply be equivalent to a single-layer transformer with $L \cdot H$ heads, which seems empirically not to be the case.[1] (By equivalent I mean something like “learns as efficiently and effectively”.) So there has to be some communication between layers.

Specifically, we are usually interested in how the output of a head in a previous layer affects the keys, queries, and values of a head in a later layer. This is called Q-, K-, and V-composition respectively.

In this post, I want to play around with how to formalize and measure composition. It’s not going to be super novel work, but I haven’t seen a write-up of this so far.

A first attempt

My first attempt at formalizing composition turns out to be not fine-grained enough for real systems, but I think it was still useful in forming my thoughts about this topic.

A descriptive definition

For the rest of the post, let $A \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{k \times n}$, and we are interested in the product $BA$ and how strongly $B$ and $A$ compose via this product. For instance, $B$ could be the query-key matrix of a head in layer 2 and $A$ the output-value matrix of a head in layer 1.

$A$ reads from a subspace of $\mathbb{R}^m$ and writes to a subspace of $\mathbb{R}^n$. If $\operatorname{rank}(A) = n$, then $A$ writes to the full space; similarly, if $\operatorname{rank}(A) = m$, then $A$ reads from the full space. $B$ reads from a subspace of $\mathbb{R}^n$ and writes to a subspace of $\mathbb{R}^k$.[2] If the read-subspace of $B$ overlaps with the write-subspace of $A$, then they can ‘communicate’ with each other via this overlap, since information passes from $A$ to $B$.
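As a toy illustration of this read/write picture (arbitrary shapes and random matrices, nothing transformer-specific), one can check how much of $A$’s write-subspace survives $B$ by comparing ranks:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 8, 3, 8            # toy sizes: A maps R^m -> R^n, B maps R^n -> R^k
A = rng.normal(size=(n, m))  # writes to a (generically) 3-dimensional subspace of R^n
B = rng.normal(size=(k, n))

# Dimension of A's write-subspace (its image) and of what B keeps of it.
rank_A = np.linalg.matrix_rank(A)
rank_BA = np.linalg.matrix_rank(B @ A)

# If rank(BA) == rank(A), nothing that A writes lands in B's kernel,
# i.e. everything A outputs can pass through to B.
print(rank_A, rank_BA)
```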

An axiomatic definition

Let’s suppose we have a measure for composition, called $c$, that maps two matrices of fitting shapes to a composition score $c(B, A) \in [0, 1]$.

Axiom 1

$$c(B, A) = 1 \iff \ker(B) \cap \operatorname{im}(A) = \{0\}$$

Why does this statement make sense? It says that the composition score should be 1 if and only if the only vector that is output by $A$ and gets mapped to 0 by $B$ is the zero vector itself, i.e. $B$ does something non-trivial with all vectors that it can potentially get from $A$.

Axiom 2

$$c(B, A) = 0 \iff \operatorname{im}(A) \subseteq \ker(B)$$

The reasoning is very similar to Axiom 1.

Axiom 3

$c$ should vary ‘smoothly’ and ‘monotonically’ in some sense, i.e. if there is ‘more’ overlap between the write-subspace of $A$ and the read-subspace of $B$, then $c(B, A)$ should be larger, and vice versa.

One possible way to formalize this in the idealized case is to use[3]

$$c(B, A) = \frac{\operatorname{rank}(BA)}{\operatorname{rank}(A)}$$

This formula clearly satisfies axioms 1 and 2 and is at least monotonically increasing in the overlap. The main issue is that it is not continuous (ergo not differentiable) and not fine-grained enough to be applicable to real-world systems.
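As a small illustration (reading the idealized score as $\operatorname{rank}(BA)/\operatorname{rank}(A)$, with made-up random matrices), the score is exactly 1 for generic full-rank matrices and only changes under exact rank deficiency:

```python
import numpy as np

def ideal_composition(B, A):
    # rank(BA) / rank(A): the fraction of A's write-subspace that B does not
    # annihilate. Discontinuous, because rank is integer-valued.
    return np.linalg.matrix_rank(B @ A) / np.linalg.matrix_rank(A)

rng = np.random.default_rng(0)
n = 16
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
print(ideal_composition(B, A))       # 1.0 for generic (full-rank) matrices

# Only exact rank deficiency moves the score, e.g. if B is projected onto half the space:
P = np.diag([1.0] * (n // 2) + [0.0] * (n // 2))
print(ideal_composition(P @ B, A))   # 0.5
```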

Composition in the real world

Unfortunately, in the real world, we are working with matrices that are learned by some messy process like SGD and represented by finite-precision floating-point numbers.

As a corollary, it is very likely that the learned matrices have full (numerical) rank.[4] This means that Axiom 1 would imply a composition score of 1, regardless of the matrices, as long as they have full rank. However, we would still like to be able to differentiate between grades of composition in this real-world case.

This suggests that our three axioms as stated above are not the right tool to think about composition in real systems.

Composition via SVD

For a different approach, let’s start by decomposing $A$ into its singular value decomposition $A = U_A \Sigma_A V_A^T$, and likewise $B = U_B \Sigma_B V_B^T$.
$U_A$ and $V_A$ are square, real and orthogonal. $\Sigma_A$ is diagonal (but not necessarily square, since it is $n \times m$ (or reversed)). The diagonal entries of $\Sigma_A$ are non-negative and are called the singular values of the matrix. The great thing about the SVD is that every matrix has one, unlike other decompositions such as the eigenvalue or Cholesky decomposition.

The SVD allows for a different perspective of a matrix. In the language of the SVD, the matrix ‘reads’ from the input by ‘measuring’ it (dot-product-ing it) with the right singular vectors (the columns of $V$), weighting each dot product with the respective singular value, and then using the results as the weights of a linear combination of the left singular vectors (the columns of $U$).[5]


Side note on notation:

You can make this even more explicit by writing the matrix in bra-ket notation or tensor product notation:

$$M = \sum_i \sigma_i \, |u_i\rangle\langle v_i| = \sum_i \sigma_i \, u_i \otimes v_i$$

(This statement feels somehow more elegant than the original SVD formulation)

We could now write any vector $x$ in the basis of the $v_i$, i.e. $x = \sum_i x_i' v_i$, and thus simplify the notation of the matrix-vector multiplication: $Mx = \sum_i \sigma_i x_i' u_i$. Note that the $x_i'$ are the coordinates of $x$ in the basis of the $v_i$, which is in general not the canonical basis that we are used to.
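A quick numerical sanity check of this ‘measure, re-scale, re-combine’ reading of the SVD (toy sizes; numpy’s convention is `M = U @ np.diag(s) @ Vt`):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
M = rng.normal(size=(n, m))
x = rng.normal(size=m)

U, s, Vt = np.linalg.svd(M, full_matrices=False)  # M = U @ diag(s) @ Vt

# 'Read': measure x against the right singular vectors (rows of Vt),
# 'scale': weight each measurement by the corresponding singular value,
# 'write': recombine the left singular vectors (columns of U).
coords = Vt @ x                  # coordinates of x in the v_i basis
Mx_via_svd = U @ (s * coords)    # sum_i sigma_i <v_i, x> u_i

print(np.allclose(M @ x, Mx_via_svd))  # True
```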


If we now write the product in this decomposition,

$$BA = U_B \Sigma_B V_B^T \, U_A \Sigma_A V_A^T,$$

we can see that the interaction magic happens in the middle four terms on the right hand side.
We can write the elements of this middle part as

$$\left(\Sigma_B V_B^T U_A \Sigma_A\right)_{ij} = \sigma_{B,i}\, \sigma_{A,j}\, \langle v_{B,i}, u_{A,j} \rangle = \sigma_{B,i}\, \sigma_{A,j}\, \cos\theta_{ij},$$

where $\theta_{ij}$ is the angle between the vectors $v_{B,i}$ and $u_{A,j}$. That means that the middle part is the matrix of singular value products, weighted by the similarity of the corresponding singular vectors. We can also write the whole product as follows, showcasing why only the middle part is relevant for measuring composition:

$$BA = \sum_{i,j} \sigma_{B,i}\, \sigma_{A,j}\, \cos\theta_{ij}\; u_{B,i} \otimes v_{A,j}$$
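To make this concrete, here is a small numpy check (arbitrary toy shapes) that the interaction entries really are products of singular values weighted by cosines, and that the outer factors only change bases:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 6, 4, 5
A = rng.normal(size=(n, m))   # A: R^m -> R^n
B = rng.normal(size=(k, n))   # B: R^n -> R^k

UA, sA, VAt = np.linalg.svd(A, full_matrices=False)
UB, sB, VBt = np.linalg.svd(B, full_matrices=False)

# The 'interaction' part: Sigma_B V_B^T U_A Sigma_A
middle = np.diag(sB) @ VBt @ UA @ np.diag(sA)

# Entry (i, j) = sigma_B_i * sigma_A_j * cos(angle between v_B_i and u_A_j);
# the cosine is just a dot product, since singular vectors have unit norm.
cosines = VBt @ UA
expected = sB[:, None] * sA[None, :] * cosines
print(np.allclose(middle, expected))          # True

# The outer factors are orthogonal, so the full product is U_B @ middle @ V_A^T:
print(np.allclose(B @ A, UB @ middle @ VAt))  # True
```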

When would we say that B and A compose strongly?

Initially I thought that we would like the columns of $U_A$ and $V_B$ to contain the same set of vectors, i.e. that $A$ and $B$ should use the same basis. However, that doesn’t quite work, since for any singular value with multiplicity > 1, there are infinitely many orthonormal bases that you could choose. A better formulation could be:

There is a one-to-one correspondence of left SV-spaces of $A$ to right SV-spaces of $B$, where an SV-space is the subspace spanned by all left/right singular vectors which correspond to the same singular value. Furthermore, each left SV-space of $A$ is equal to its corresponding right SV-space of $B$.

In particular, if all singular values have multiplicity 1, then this reduces to $U_A$ and $V_B$ containing the same set of basis vectors.

Since they each form an orthogonal basis of $\mathbb{R}^n$, this means the SV-spaces are either equal or orthogonal. This means that, following our derivation from above, the ‘interaction’ part would have a block-diagonal structure (in the special case of multiplicity 1 for all SVs, we’d get a diagonal structure). Also, each block is, up to the constant factor $\sigma_{B,k}\sigma_{A,k}$, an orthogonal matrix, since its entries are the dot products between two orthonormal bases of the same SV-space.

Any metric that we come up with should be invariant to the basis we choose for any particular SV-space (or block).

Enter Candidate 1: Frobenius norm

Anthropic proposes to use the following composition measure

$$c(B, A) = \frac{\|BA\|_F}{\|B\|_F \, \|A\|_F}$$

where $\|\cdot\|_F$ denotes the Frobenius norm.

Note that $c(B, A) \le 1$, because the Frobenius norm is submultiplicative, i.e. $\|BA\|_F \le \|B\|_F\, \|A\|_F$.

The Frobenius norm can be characterized either as the square root of the sum of squares of the matrix elements or as the square root of the sum of squares of the singular values: $\|M\|_F = \sqrt{\sum_{i,j} M_{ij}^2} = \sqrt{\sum_i \sigma_i^2}$.

Let’s use our derivation above to write out the Frobenius norm of $BA$. First note that the Frobenius norm is invariant under rotations, which means that

$$\|BA\|_F = \left\|U_B \left(\Sigma_B V_B^T U_A \Sigma_A\right) V_A^T\right\|_F = \left\|\Sigma_B V_B^T U_A \Sigma_A\right\|_F = \sqrt{\sum_{i,j} \sigma_{B,i}^2\, \sigma_{A,j}^2 \cos^2\theta_{ij}}$$
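Here is a minimal numpy version of this measure and of the rotation-invariance step (toy shapes and an illustrative function name; in a real model $A$ and $B$ would be the relevant OV and QK matrices):

```python
import numpy as np

def composition_score(B, A):
    # ||BA||_F / (||B||_F * ||A||_F), always in [0, 1] by submultiplicativity.
    return np.linalg.norm(B @ A) / (np.linalg.norm(B) * np.linalg.norm(A))

rng = np.random.default_rng(0)
n, m, k = 6, 4, 5
A = rng.normal(size=(n, m))
B = rng.normal(size=(k, n))

UA, sA, VAt = np.linalg.svd(A, full_matrices=False)
UB, sB, VBt = np.linalg.svd(B, full_matrices=False)

# Rotation invariance: ||BA||_F equals the norm of the middle part alone.
middle = np.diag(sB) @ VBt @ UA @ np.diag(sA)
print(np.allclose(np.linalg.norm(B @ A), np.linalg.norm(middle)))  # True

print(composition_score(B, A))
```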

When is this composition strong?

In the special case of identical SV-spaces, we can use our knowledge about the block-diagonal structure of this interaction part. Letting $D_k$ be the $k$-th block (corresponding to the matched singular values $\sigma_{B,k}$ and $\sigma_{A,k}$ with multiplicity $m_k$), we get

$$\|BA\|_F = \sqrt{\sum_k \|D_k\|_F^2} = \sqrt{\sum_k m_k\, \sigma_{B,k}^2\, \sigma_{A,k}^2}$$

In the even more special case where all singular values have multiplicity 1, we get

$$\|BA\|_F = \sqrt{\sum_i \sigma_{B,i}^2\, \sigma_{A,i}^2}$$

Remember this, as we will get back to it later!
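As a quick sanity check of this special case, here is a sketch (arbitrary toy sizes and singular values) that constructs $B$ so that its right singular vectors coincide with $A$’s left singular vectors, paired in the same order, and verifies the formula above numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

A = rng.normal(size=(n, n))
UA, sA, VAt = np.linalg.svd(A)                   # sA in descending order

# Build B whose right singular vectors are exactly A's left singular vectors,
# paired in the same (descending) order of singular values.
sB = np.sort(rng.uniform(0.5, 2.0, size=n))[::-1]
UB = np.linalg.qr(rng.normal(size=(n, n)))[0]    # arbitrary orthogonal output basis
B = UB @ np.diag(sB) @ UA.T                      # i.e. V_B = U_A

lhs = np.linalg.norm(B @ A)                      # ||BA||_F
rhs = np.sqrt(np.sum(sB**2 * sA**2))             # sqrt(sum_i sB_i^2 sA_i^2)
print(np.allclose(lhs, rhs))                     # True
```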

When is this composition weak?

Assuming a multiplicity of 1 (which is very likely in the finite-precision case), we intuitively get weak composition when directions that get disproportionately magnified by $A$ get disproportionately squashed by $B$, and vice versa (meaning that singular value pairs with high singular vector similarity have very different relative magnitudes).
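A sketch of that intuition (same toy construction as above, made-up numbers): keeping the shared basis but reversing which of $B$’s singular values reads which of $A$’s directions lowers the score:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

A = rng.normal(size=(n, n))
UA, sA, VAt = np.linalg.svd(A)                   # sA in descending order

UB = np.linalg.qr(rng.normal(size=(n, n)))[0]
sB = np.sort(rng.uniform(0.5, 2.0, size=n))[::-1]

def score(B, A):
    return np.linalg.norm(B @ A) / (np.linalg.norm(B) * np.linalg.norm(A))

# Aligned: B's i-th largest direction reads A's i-th largest direction.
B_aligned = UB @ np.diag(sB) @ UA.T
# Reversed: B's largest direction reads A's smallest direction, and so on.
B_reversed = UB @ np.diag(sB[::-1]) @ UA.T

print(score(B_aligned, A) > score(B_reversed, A))  # True
```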

A tighter bound on the denominator

Nick Turner and Jack Strand used the fact that there is a tighter bound on the sums of powers of the singular values of $BA$ (from Horn & Johnson’s Topics in Matrix Analysis): with singular values sorted in decreasing order (and padded with zeros where necessary), for every $q \ge 1$ and every $p > 0$,

$$\sum_{i=1}^{q} \sigma_i(BA)^p \le \sum_{i=1}^{q} \sigma_i(B)^p\, \sigma_i(A)^p.$$

Setting $p = 2$ gives a bound on the Frobenius norm that is tighter than the original submultiplicativity bound.

The only downside is that it is more expensive to compute, since we actually have to compute the singular values of $A$ and $B$ rather than just summing the squares of their entries.

ETA: Note, however, that it is more computationally efficient to compute $\|BA\|_F$ via the SVDs of $A$ and $B$, in which case the tighter bound is not more costly to compute.
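To illustrate that point, here is a sketch (with made-up sizes standing in for $d_{\text{model}}$ and $d_{\text{head}}$) that gets $\|BA\|_F$ and the tighter denominator from the same truncated SVDs, using only a small $r \times r$ middle matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head = 256, 16   # illustrative 'large space' / 'small space' sizes

# Low-rank A and B, as for real OV / QK matrices (rank at most d_head).
A = rng.normal(size=(d_model, d_head)) @ rng.normal(size=(d_head, d_model))
B = rng.normal(size=(d_model, d_head)) @ rng.normal(size=(d_head, d_model))

UA, sA, VAt = np.linalg.svd(A, full_matrices=False)
UB, sB, VBt = np.linalg.svd(B, full_matrices=False)

r = d_head  # numerical rank of each factor; components beyond r are ~0
middle = np.diag(sB[:r]) @ (VBt[:r] @ UA[:, :r]) @ np.diag(sA[:r])  # r x r

# Same Frobenius norm as the full d_model x d_model product:
print(np.allclose(np.linalg.norm(middle), np.linalg.norm(B @ A)))   # True

# Tighter denominator from the singular values we already have:
denom = np.sqrt(np.sum((sB[:r] * sA[:r]) ** 2))
print(np.linalg.norm(middle) / denom)
```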

Putting this together they propose

$$c(B, A) = \frac{\|BA\|_F}{\sqrt{\sum_i \left(\sigma_{B,i}\, \sigma_{A,i}\right)^2}} = \frac{\|BA\|_F}{\|\sigma_B \odot \sigma_A\|_2}$$

where $\sigma_A$ and $\sigma_B$ are the vectors of singular values sorted in decreasing order, and the element-wise product is taken only over the first $\min(\operatorname{len}(\sigma_A), \operatorname{len}(\sigma_B))$ elements (otherwise it would not be defined)[6].
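A minimal numpy sketch of this measure, assuming the sorted-singular-value convention above (the function name is just illustrative):

```python
import numpy as np

def composition_score_tight(B, A):
    # ||BA||_F divided by the tighter bound sqrt(sum_i (sigma_B_i * sigma_A_i)^2),
    # with singular values in decreasing order (numpy's default) and the
    # element-wise product truncated to the shorter of the two vectors.
    sA = np.linalg.svd(A, compute_uv=False)
    sB = np.linalg.svd(B, compute_uv=False)
    k = min(len(sA), len(sB))
    denom = np.sqrt(np.sum((sB[:k] * sA[:k]) ** 2))
    return np.linalg.norm(B @ A) / denom

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 5))
B = rng.normal(size=(6, 8))
print(composition_score_tight(B, A))  # always <= 1, and at least as large as
                                      # the Frobenius-denominator version
```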

Coming back to our analysis of the strong composition case, we see that this bound is realized in the case of multiplicity 1 and identical bases!

But wait, there is one more assumption we need. In our previous derivation, we simply re-ordered the bases at will to make them match, but to realize this bound not only do the matrices need to have the same bases but also the order of the singular values needs to be the same!

This makes a lot of sense from the POV of wanting to measure compositionality! It’s maximal if both matrices agree on a basis and they also agree on the order of importance of these directions.

Are there other ways to realize this bound?

The trivial way is to make all singular values of $A$ (or $B$) equal, i.e. to take a multiple of an orthogonal matrix. However, in the real world we will usually not have multiplicity > 1 for non-zero singular values due to the finite-precision issue. If we also assume full rank for $A$ and $B$, then there is no singular value equal to 0.

Note that any distinct singular value > 0 has a unique singular vector (up to sign), meaning that in this case both sets of singular vectors of $A$ and $B$ will be unique.

At this point I don’t know how to show that in this case the bound can only be realized in the case of matching SV-spaces. Suggestions welcome! As a reminder, here’s the formula:

$$\|BA\|_F = \sqrt{\sum_i \sigma_{B,i}^2\, \sigma_{A,i}^2}$$
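For what it’s worth, here is a small numerical experiment (not a proof): fixing all singular values and rotating $B$’s right singular vectors away from $A$’s left singular vectors makes the ratio drop below 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

A = rng.normal(size=(n, n))
UA, sA, _ = np.linalg.svd(A)
sB = np.sort(rng.uniform(0.5, 2.0, size=n))[::-1]
UB = np.linalg.qr(rng.normal(size=(n, n)))[0]

def ratio(VB):
    # ||BA||_F divided by the would-be bound sqrt(sum_i sB_i^2 sA_i^2)
    B = UB @ np.diag(sB) @ VB.T
    return np.linalg.norm(B @ A) / np.sqrt(np.sum(sB**2 * sA**2))

print(ratio(UA))   # ~1.0: the bound is realized when V_B = U_A

# Rotate B's first two right singular vectors inside their 2D span: the
# SV-spaces no longer match, and the ratio falls below 1 (generically).
for angle in (0.1, 0.5, 1.0):
    G = np.eye(n)
    G[:2, :2] = [[np.cos(angle), -np.sin(angle)],
                 [np.sin(angle),  np.cos(angle)]]
    print(ratio(UA @ G))
```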

Other options

Determinant

You might think that the determinant could be a useful tool, since it also measures how a transformation distorts volume, similarly to the Frobenius norm, which measures the distortion along each singular vector. Unfortunately, the determinant is only defined for square matrices, and since in real transformers the relevant matrices are heavily rank-deficient (their rank is at most $d_{\text{head}} \ll d_{\text{model}}$), the determinant of the product will usually be 0 anyway.

Information Theory

I’m currently thinking about a way to formalize composition via information theoretic approaches and might post a follow-up at some point.

Thanks to Leon Lang and Neel Nanda for helpful comments and suggestions for improvements! Thanks to Nick Turner for discussions on the tighter denominator bound.

  1. ^

    Note that this argument is not valid for ‘normal’ transformers which include MLPs, since the MLPs can move information between dimensions of the residual stream.

  2. ^

    In real transformers, $d_{\text{head}} \ll d_{\text{model}}$, and thus reading means projecting from a large space into a small space and writing means projecting from a small space to a large space.

  3. ^

    Thanks to Leon Lang for suggesting this.

  4. ^

    I’d like to see a formal analysis of this, e.g. by looking at the matrices in GPT-2. Thanks to Neel Nanda for suggesting this. ETA: There is some preliminary work by Nick Turner and Jack Strand that verifies this for GPT-Neo and GPT-J.

  5. ^

    More rigorously, for $M = U \Sigma V^T$: $V^T x$ is $x$ in the basis given by the columns of $V$ (the columns of $U$ and $V$ each form a basis of the full output and input space respectively, since they are orthogonal and there are $n$ and $m$ of them). $\Sigma$ then re-scales $x$ in this new basis and $U$ maps the result into the output space.

  6. ^

    If you want, try to prove that this bound is actually tighter. It’s pretty straightforward.
