Prediction: the SAE results may be better for ‘wider’, but only if you control for something else, possibly perplexity, or regularize more heavily. The literature on wider vs deeper NNs has historically shown a stylized fact of wider NNs tending to ‘memorize more, generalize less’ (which you can interpret as the fine-grained features being used mostly to memorize individual datapoints, perhaps exploiting dataset biases or nonrobust features), and so deeper NNs are better (if you can optimize them effectively without exploding/collapsing gradients), which would potentially more than offset any orthogonality gains from the greater width. Thus, you would either need to regularize more heavily (‘over-regularizing’ from the POV of the wide net, because it would achieve better performance if it could memorize more of the long tail, the way it ‘wants’ to) or otherwise adjust for performance (to disentangle the performance benefits of wideness from the distorting effect of achieving that performance via more memorization).
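To make the ‘adjust for performance’ suggestion concrete, here is a minimal sketch of one way to do it; all the numbers, and the `sae_score` metric standing in for whatever SAE interpretability measure is being compared, are made up for illustration. The idea is just to compare the naive width effect against the width effect after including perplexity as a covariate, so that any apparent interpretability gain which is really just a performance (memorization) gain gets partialed out:

```python
import numpy as np

# Hypothetical records, one per trained model: 'width' and 'depth' are
# architecture knobs, 'perplexity' is held-out LM perplexity, and
# 'sae_score' is a placeholder SAE interpretability metric.
records = [
    # width, depth, perplexity, sae_score
    (1024, 12, 18.3, 0.61),
    (2048, 12, 17.1, 0.64),
    (4096, 12, 16.5, 0.69),
    (1024, 24, 16.9, 0.63),
    (1024, 48, 16.2, 0.66),
]
width, depth, ppl, score = map(np.array, zip(*records))

# Naive comparison: slope of SAE score on log-width, ignoring performance.
naive_slope = np.polyfit(np.log2(width), score, 1)[0]

# Adjusted comparison: regress the SAE score on log-width *and* perplexity,
# so the width coefficient is net of whatever the wider nets bought simply
# by memorizing their way to lower perplexity.
X = np.column_stack([np.ones_like(ppl), np.log2(width), ppl])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"naive width slope:    {naive_slope:+.3f}")
print(f"adjusted width slope: {coef[1]:+.3f}  (controlling for perplexity)")
```

An alternative to the regression adjustment would be matching: train the wide and deep models to the same target perplexity (stopping early or regularizing the wide one) and only then compare their SAE metrics, which is the ‘over-regularizing’ route described above.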