What’s the motivation for that specific sparsity prior/regularizer? It seems interestingly different from the standard Ln norms.
Empirically, it works better than all the Ln norms at getting me zeros. Theoretically, it really likes zeros, whereas lots of other norms just like low numbers, which is a different thing when talking about sparsity. I want zeros. I don’t just want low numbers.
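To make the “low numbers vs. zeros” distinction concrete, here is a minimal sketch (not the specific regularizer under discussion) comparing two standard baselines on a toy sparse regression problem: an L2 (ridge) penalty optimized by plain gradient descent, which shrinks weights toward small but nonzero values, versus an L1 penalty optimized by proximal gradient descent (ISTA), whose soft-thresholding step sets weights to exactly zero. All names and hyperparameters below are illustrative choices, not from the original discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem with a sparse ground truth:
# only the first 3 of 20 coefficients are nonzero.
n, d = 50, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

lam, lr, steps = 0.5, 0.01, 2000  # illustrative hyperparameters

# L2 (ridge) via plain gradient descent: the penalty gradient lam*w
# shrinks every weight, but generically none lands exactly on 0.
w_ridge = np.zeros(d)
for _ in range(steps):
    grad = X.T @ (X @ w_ridge - y) / n + lam * w_ridge
    w_ridge -= lr * grad

# L1 via proximal gradient (ISTA): after each gradient step,
# soft-thresholding snaps small weights to exactly 0.
w_lasso = np.zeros(d)
for _ in range(steps):
    grad = X.T @ (X @ w_lasso - y) / n
    w_lasso -= lr * grad
    w_lasso = np.sign(w_lasso) * np.maximum(np.abs(w_lasso) - lr * lam, 0.0)

print("exact zeros, ridge:", int(np.sum(w_ridge == 0.0)))
print("exact zeros, lasso:", int(np.sum(w_lasso == 0.0)))
```

Counting exact zeros (not just small values) is the point: the ridge solution is dense, while the L1 proximal step produces genuine zeros, which is the property the answer above is asking of its regularizer, only more so.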