For what it’s worth (perhaps nothing): in private experiments with certain toy (transformer) models, I’ve seen task B performance get wiped out almost immediately once you stop training on it, in situations where the two tasks are related in some way.
I haven’t looked at how deep the erasure goes, or whether the task is far easier to revive than it was to train in the first place.
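A minimal sketch of the pattern (not the transformer experiment itself, which I can’t reproduce here): if two tasks share parameters and their gradients conflict, joint training settles on a compromise, and dropping one task lets the other’s gradient immediately pull the weights away. All task definitions below are invented for illustration.

```python
import numpy as np

# Crudest possible stand-in for "two related tasks sharing a model": one shared
# parameter w, task A wants y = 2x, task B wants y = 3x on the same inputs.
# The tasks conflict on the shared parameter, so joint training compromises;
# once task B is dropped, task A's gradient pulls w away and B's loss jumps.
rng = np.random.default_rng(0)
lr = 0.05
w = 0.0

def loss_B(w):
    # task B's expected squared error: E[(w*x - 3x)^2] = (w - 3)^2 with E[x^2] = 1
    return (w - 3.0) ** 2

def sgd_step(w, target):
    # one SGD step on (w*x - target*x)^2 with x ~ N(0, 1)
    x = rng.normal()
    return w - lr * 2 * (w * x - target * x) * x

b_joint, b_after = [], []
for t in range(2000):            # phase 1: alternate steps on tasks A and B
    w = sgd_step(w, 2.0) if t % 2 == 0 else sgd_step(w, 3.0)
    b_joint.append(loss_B(w))
for t in range(200):             # phase 2: train on task A only
    w = sgd_step(w, 2.0)
    b_after.append(loss_B(w))

print(f"task B loss, end of joint training (avg of last 100 steps): "
      f"{np.mean(b_joint[-100:]):.2f}")
print(f"task B loss, after 200 A-only steps (avg of last 100):      "
      f"{np.mean(b_after[-100:]):.2f}")
```

In this toy version the B loss rises within tens of A-only steps, which at least matches the "wiped out almost immediately" shape; whether the transformer case is the same mechanism is an open question.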
Infertility rates are rising, and nobody seems to quite know why. Below is a possible (perhaps trivial) explanation that I haven’t seen mentioned anywhere.
I’m not in this field personally, so it’s possible this theory is already out there, but asking GPT about it doesn’t yield the proposed explanation: https://chat.openai.com/share/ab4138f6-978c-445a-9228-674ffa5584ea
Toy model:
- a family is either fertile or infertile, and fertility is hereditary
- the modal fertile family can have up to 10 kids; the modal infertile family can only have 2
- in the olden days, families aimed to have as many kids as they could
- now families aim to have 2 kids each
Under this model, the olden-days gene pool would contain a high proportion of fertile people, but the modern one wouldn’t. Put differently, the old convention led to a strong positive correlation between fertility and representation in the gene pool, while the new convention leads to zero correlation. This removes the selective pressure on fertility, so we should expect fertility to drop and infertility to rise.
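To check that the argument holds up, here is a quick simulation of the toy model. Two assumptions are mine, not the model’s: inheritance is asexual (each individual founds a family), and a small mutation rate flips fertile kids to infertile, so that fertility decays unless selection pushes back.

```python
import random

def simulate(target_kids, generations=50, pop=5_000, mutation=0.01, seed=0):
    """Fertile fraction over time in the toy model (asexual inheritance and
    a fixed population size are simplifications added for this sketch).

    Each individual founds a family. A fertile family can have up to 10 kids,
    an infertile one up to 2; every family has min(its max, target_kids) kids.
    Kids inherit the parent's fertility but flip to infertile with probability
    `mutation`, so without selection the fertile fraction decays.
    """
    rng = random.Random(seed)
    people = [True] * (pop // 2) + [False] * (pop - pop // 2)  # start 50/50
    history = []
    for _ in range(generations):
        kids = []
        for fertile in people:
            n = min(10 if fertile else 2, target_kids)
            kids.extend(fertile and rng.random() > mutation for _ in range(n))
        people = rng.sample(kids, pop)  # resample back down to a fixed size
        history.append(sum(people) / pop)
    return history

old = simulate(target_kids=10)  # olden days: as many kids as you can
new = simulate(target_kids=2)   # modern convention: everyone aims for 2
print(f"fertile fraction after 50 generations: "
      f"old convention {old[-1]:.2f}, new convention {new[-1]:.2f}")
```

Under the old convention, selection balances the mutation pressure and the fertile fraction stays high; under the 2-kid convention, everyone has the same number of kids regardless of fertility, so the fertile fraction just decays.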
Empirical evidence for this would be something like an analysis of the time series of family-size variance and infertility rates: is lower variance followed by increased infertility?