According to Stefan’s experimental data, the squared Frobenius norm of a matrix W equals the expected squared L2 vector norm of W⋅x for a random vector x whose entries are drawn i.i.d. from a standard normal distribution (mean 0, variance 1). So calculating the Frobenius norm seems equivalent to testing the behaviour on random inputs. Maybe this is a theorem?
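A quick Monte Carlo sanity check of this identity (a sketch using numpy; the matrix shape and sample count are arbitrary choices, not from Stefan’s setup):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8))

# Monte Carlo estimate of E[||W x||_2^2] for x ~ N(0, I):
# draw many standard-normal vectors x, apply W, average the squared norms.
n = 200_000
X = rng.standard_normal((8, n))
mc = np.mean(np.sum((W @ X) ** 2, axis=0))

# Compare against the squared Frobenius norm of W.
frob_sq = np.linalg.norm(W, "fro") ** 2
print(mc, frob_sq)  # the two values agree up to Monte Carlo error
```

The agreement improves as n grows, as expected for a sample-mean estimate.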
I found a proof of this theorem: https://math.stackexchange.com/questions/2530533/expected-value-of-square-of-euclidean-norm-of-a-gaussian-random-vector
Thanks for finding this!
There was one assumption in the StackExchange post I didn’t immediately get: that the covariance of y = Ax is AΣAᵀ. But I just realized the proof for that is rather short. Assuming Σ (the covariance of x) is the identity, so the components xⱼ are independent with unit variance, the left side (the i-th diagonal entry) is

Var(yᵢ) = Var(∑ⱼ Aᵢⱼxⱼ) = ∑ⱼ Aᵢⱼ² Var(xⱼ) = ∑ⱼ Aᵢⱼ²

and the right side is

(AAᵀ)ᵢᵢ = ∑ⱼ Aᵢⱼ(Aᵀ)ⱼᵢ = ∑ⱼ Aᵢⱼ²

so this works out. (The ∑ symbols are sums over j here.)
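The Cov(Ax) = AΣAᵀ step can also be checked numerically (a sketch assuming Σ = I, with an arbitrarily chosen matrix shape):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))

# Sample x with covariance Sigma = I, form y = A x, and estimate Cov(y).
n = 500_000
X = rng.standard_normal((6, n))
Y = A @ X
cov_emp = np.cov(Y)      # empirical covariance matrix of y
cov_theory = A @ A.T     # A Sigma A^T, with Sigma = I
print(np.max(np.abs(cov_emp - cov_theory)))  # small sampling error
```

The diagonal of A @ A.T is exactly the ∑ⱼ Aᵢⱼ² from the derivation above, and summing that diagonal recovers the squared Frobenius norm, which is what ties the two observations together.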