If I’m understanding right, the MVG (modularly varying goals) thing doesn’t seem to me to be set up in a way that makes sense.
Let’s imagine a tiny micro-organism that lives in the soil. It has a short life-cycle, say one week. So sometimes it’s born in the winter and sometimes in the summer, and these different seasons call for different behaviors in various ways. In this situation…
- The thing that I would NOT expect is that every 26 generations, Evolution changes its genes to be more summer-adapted, and then after another 26 generations, Evolution changes its genes to be more winter-adapted, etc.
- The thing that I WOULD expect is that we wind up with (1) a genome that stays the same through seasons, and (2) a genome that encodes for (among other things) a “season sensor” that can trigger appropriate downstream behaviors.
That’s an example concerning evolution-as-a-learning-algorithm. If you prefer the within-lifetime learning algorithm, I have an example for that too. Let’s just replace the micro-organism with a human:
- The thing that I would NOT expect is that every 26 weeks, the human learns “the weather outside is and always will be cold”, and then 26 weeks later learns “oops, I was wrong, actually the weather outside is and always will be hot”.
- The thing that I WOULD expect is that the human learns a general, permanent piece of knowledge, namely that sometimes it’s summer and sometimes it’s winter, and what to do in each case. AND, they learn to pick up on more specific cues that indicate whether it’s summer or winter at this particular moment. (For example, if it was winter yesterday, it’s probably also winter today.)
Anyway, if I understand your MVG experiment, it’s the first bullet point, not the second. If so, I wouldn’t have any strong expectation that it should work at all, notwithstanding the paper, and I would suggest trying to get to the second bullet point.
Sorry if I’m misunderstanding.
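The contrast between the two bullet points can be made concrete with a toy simulation. This is only a sketch, not the MVG setup itself: the 26-generation seasons come from the example above, but the re-adaptation lag and the 0/1 payoff are made-up assumptions.

```python
# Toy sketch of the two regimes: a trait that evolution must rewrite every
# season, vs. a fixed genome with a season sensor. LAG and the payoff are
# invented for illustration.

SEASON_LEN = 26  # generations per season, as in the example

def season(t):
    """The environment alternates between 'summer' and 'winter'."""
    return "summer" if (t // SEASON_LEN) % 2 == 0 else "winter"

# Regime 1: evolution rewrites a hard-coded trait after each flip. We model
# "evolution catching up" crudely: for LAG generations after a season change,
# the trait still matches the *previous* season.
LAG = 8  # hypothetical generations of selection needed to re-adapt

def hardcoded_behavior(t):
    current = season(t)
    previous = "winter" if current == "summer" else "summer"
    return current if (t % SEASON_LEN) >= LAG else previous

# Regime 2: a fixed genome whose "season sensor" reads the environment.
def sensor_behavior(t):
    return season(t)

T = 10 * SEASON_LEN
hard = sum(hardcoded_behavior(t) == season(t) for t in range(T)) / T
sens = sum(sensor_behavior(t) == season(t) for t in range(T)) / T
print(f"re-adapting genome, fraction of time well-adapted: {hard:.2f}")  # 0.69
print(f"sensor genome, fraction of time well-adapted:      {sens:.2f}")  # 1.00
```

Under this toy payoff the sensor genome dominates whenever re-adapting costs anything at all, which is the intuition behind expecting the second bullet point rather than the first.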
I want to say that bacteria:
- have shorter lifecycles than that (like, less than a day)
- and yet still have circadian rhythms, surprisingly
Searching ‘bacteria colony circadian rhythm’ turned up:
https://link.springer.com/chapter/10.1007/978-3-030-72158-9_1
abstract:

Prokaryotes were long thought to be incapable of expressing circadian (daily) rhythms. Research on nitrogen-fixing cyanobacteria in the 1980s squashed that dogma and showed that these bacteria could fulfill the criteria for circadian rhythmicity. Development of a luminescence reporter strain of Synechococcus elongatus PCC 7942 established a model system that ultimately led to the best characterized circadian clockwork at a molecular level. The conclusion of this chapter lists references to the seminal discoveries that have come from the study of cyanobacterial circadian clocks.
Okay, how long is the lifecycle of Cyanobacteria?
Searching ‘cyanobacteria lifespan’:
https://www.researchgate.net/post/What_is_the_average_life_span_of_Cyanobacteria
6-12 hours (depending on temperature).
the genome encodes for (among other things) a “season sensor” that can trigger appropriate downstream behaviors.

So you can look into this and check for that. I’d expect a clock, which would switch things on and off. But I don’t know how, or whether, cyanobacteria handle seasons at all. I’d first check circadian rhythms, because that seems easier. (I want to say that the day/night difference is stronger than the seasonal one, and occurs everywhere, but it might depend on your location. Polar extremes with month-long ‘days’/‘nights’ clearly might work differently, and the fact that day/night handling has to change there does seem like more of a reason for a sensor approach, though it’s not clear how much benefit that would add. I’d guess it’s still location-dependent.)
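The clock-vs-sensor distinction can be sketched crudely like this (the 24.5-hour free-running period and the phase window are invented numbers, not measured values):

```python
# Toy contrast between a "clock" and a "sensor" strategy for tracking
# day/night. All numbers are made up for illustration.

def daylight(hour):
    """True environment: light between 06:00 and 18:00."""
    return 6 <= (hour % 24) < 18

def sensor_says_day(hour):
    # A sensor just reads the current light level.
    return daylight(hour)

def clock_says_day(hour, period=24.5):
    # A free-running clock guesses the phase from elapsed time alone.
    # With a slightly wrong period it drifts out of sync over time.
    phase = (hour % period) / period  # 0..1 within the clock's own cycle
    return 0.25 <= phase < 0.75       # the clock's internal "daytime"

hours = range(24 * 30)  # one month of hourly checks
sensor_acc = sum(sensor_says_day(h) == daylight(h) for h in hours) / len(hours)
clock_acc = sum(clock_says_day(h) == daylight(h) for h in hours) / len(hours)
print(f"sensor accuracy over a month: {sensor_acc:.2f}")
print(f"drifting clock accuracy:      {clock_acc:.2f}")
```

In real organisms these aren’t exclusive: circadian clocks are entrained, i.e. a light sensor keeps resetting the clock’s phase, so the clock stays accurate even without a perfect intrinsic period.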
In the human learning case, what the human is picking up on is that there is a distinct thing called temperature, which can vary, and that this matters a lot. There is now a temperature module/abstraction where there wasn’t one before. That’s the learning step MVG is hinting at, I think.
Regarding the microorganism: the example situation you give isn’t directly covered by MVG as described here, but see the section “Specialisation drives the evolution of modularity” in the literature review. Basically: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000719
If you have genes that specialise to be expressed or not depending on initial conditions, you get a dynamic nigh identical to this one: two loss functions you need to “do well” on, with a lot of shared tasks, except for a single submodule that needs to change depending on external circumstances. This gets you two gene-activity patterns with a lot of shared gene-activity states, like the shared parameter values between the designs N_1 and N_2 here. The work of “fine-tuning” the model to L_1 and L_2 is then essentially “already done”, and is accessed by setting the initial conditions right, instead of needing to be redone by “evolution” after each change, as in the simulation in this article. But it very much seems like the same dynamic to me.
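A minimal sketch of that dynamic (gene names and numbers are invented; this is not the article’s simulation): a fixed “genome” contains shared machinery plus two specialised submodules, and the initial condition selects which submodule is expressed, so nothing needs to be re-evolved when the environment flips.

```python
# Toy sketch: shared genes plus environment-gated submodules. The "coat"
# genes stand in for the single submodule that differs between the two
# designs; everything else is shared.

GENOME = {
    "metabolism": lambda food: food * 0.8,      # shared across environments
    "repair":     lambda damage: damage * 0.5,  # shared across environments
    "coat_thick": lambda: "grow thick coat",    # winter-only submodule
    "coat_thin":  lambda: "grow thin coat",     # summer-only submodule
}

def express(genome, initial_condition):
    """Return the active gene set (design N_1 or N_2, in the comment's
    terms): all shared genes plus the one environment-matched submodule."""
    active = {k: v for k, v in genome.items() if not k.startswith("coat")}
    key = "coat_thick" if initial_condition == "winter" else "coat_thin"
    active["coat"] = genome[key]
    return active

winter_org = express(GENOME, "winter")
summer_org = express(GENOME, "summer")

# The shared machinery is literally the same objects; nothing re-adapted:
assert winter_org["metabolism"] is summer_org["metabolism"]
print(winter_org["coat"]())  # -> grow thick coat
print(summer_org["coat"]())  # -> grow thin coat
```

Switching from the N_1-like design to the N_2-like design is then just flipping the initial condition; none of the shared parameters are re-evolved, which is the “fine-tuning already done” point above.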