I agree that all is not lost wrt sparsity, and if the SPH turns out to be true it might help us disentangle the superimposed features to better understand what is going on. You could imagine constructing an “expanded” view of a neural network that allocates one neuron per feature; it would have sparse activations for any given data point and would be easier to reason about. That seems impractical in reality, though, since the cost of constructing this view could in principle be exponential: the number of “almost orthogonal” vectors that fit in a vector space grows exponentially with its dimension.
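To make the “almost orthogonal” point concrete, here is a small sketch (my own illustration, not from the original comment): random unit vectors in a high-dimensional space are, with high probability, nearly orthogonal to each other, which is the geometric fact that lets a network superimpose far more features than it has neurons.

```python
# Sketch: random unit vectors in high dimensions are nearly orthogonal.
# All names here (d, n, etc.) are illustrative choices, not anything
# from the original discussion.
import numpy as np

rng = np.random.default_rng(0)
d = 1000  # dimension of the (hypothetical) activation space
n = 100   # number of random "feature" directions to pack into it

# Draw n random directions and normalize them to unit length.
v = rng.standard_normal((n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Pairwise cosine similarities; zero out the diagonal (self-similarity).
cos = v @ v.T
np.fill_diagonal(cos, 0.0)
max_overlap = np.abs(cos).max()
print(f"max |cos| among {n} random vectors in R^{d}: {max_overlap:.3f}")
```

Even with 100 directions packed into 1000 dimensions, the worst-case overlap stays small; the catch for interpretability is that recovering which of these exponentially many directions the network actually uses is the hard part.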
I think my original comment was meant more as a caution against the specific approach of “find an interpretable basis in activation space”, which might be futile, rather than against all attempts at finding a sparse representation of the computations happening within the network.