In the human learning case, what the human is picking up on is that there is a distinct thing called temperature, which can take different values, and that this matters a great deal. There is now a temperature module/abstraction where there wasn’t one before. That’s the learning step MVG is hinting at, I think.
Regarding the microorganism, the example situation you give is not directly covered by MVG as described here, but see the section “Specialisation drives the evolution of modularity” in the literature review, basically: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000719
If you have genes that specialise to express or not express depending on initial conditions, you get a dynamic nearly identical to this one: two loss functions you need to “do well” on, sharing most tasks, except for a single submodule that needs to change depending on external circumstance. This gets you two gene activity patterns with many shared gene activity states, like the shared parameter values between the designs N_1 and N_2 here. The work of “fine-tuning” the model to L_1 and L_2 is then essentially “already done”, and is accessed by setting the initial conditions right, instead of needing to be redone by “evolution” after each change, as in the simulation in this article. But it very much seems like the same dynamic to me.
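To make the analogy concrete, here is a minimal toy sketch of that dynamic. All names and numbers are hypothetical, not taken from the article: most “weights” are shared between the two designs, and a single submodule weight is selected by the initial condition, so switching environments selects an already-tuned design rather than requiring re-optimisation.

```python
# Toy sketch (all names/values hypothetical): two environments share
# most parameter values; one submodule differs, and is selected by the
# initial condition rather than re-evolved after each change.

SHARED = {"w1": 1.0, "b": -1.0}            # shared "gene activity states"
SUBMODULE = {"cold": 2.0, "hot": -2.0}     # the one part that must differ

def network(x, condition):
    """N_1 / N_2: identical except for the condition-selected submodule."""
    return (SHARED["w1"] + SUBMODULE[condition]) * x + SHARED["b"]

def loss(condition, targets):
    """L_1 / L_2: squared error against environment-specific targets."""
    return sum((network(x, condition) - y) ** 2 for x, y in targets)

# Targets each design was (by construction) already tuned to hit:
targets_cold = [(1.0, 2.0), (2.0, 5.0)]    # fits y = 3x - 1
targets_hot = [(1.0, -2.0), (2.0, -3.0)]   # fits y = -x - 1

# Flipping the initial condition accesses the other tuned design;
# both losses are already minimal, with no further "evolution" step.
print(loss("cold", targets_cold))
print(loss("hot", targets_hot))
```

The point of the toy: the “fine-tuning” work lives in the two submodule values, so adapting to a changed environment is a lookup on the initial condition, not a fresh optimisation run.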