Okay, “genetic knob” is maybe the right language. What I meant is that for evolution to be able to inner-align humans to IGF, you’d need:

1. Humans occasionally wanting IGF, and using that to inform plans
2. Humans carrying out those plans
3. This being a successful strategy (e.g., humans planning for IGF didn’t just overthink stuff and die)
4. This being “accessible to the genome” in some way, e.g. genetic knobs you could turn up and down that increased people’s propensity for 1, 2, and 3.
I’m saying (1) was not present, so (1), (2), and (3) were clearly not present.
It’s possible a proxy like seeing surviving grandkids was present, but in that case (2) and (3) were not.
In that case, my theory is consistent with the evidence, but doesn’t necessarily explain it better than other theories. That’s fine.
Wrt your “what actually caused it”:

- Stuff like compute or intelligence limitations are subcomponents of (2) and (3).
- Path dependence: this is what the whole post is about, in some sense. Or: the brain is conservative. Similarly, I think ML models are conservative, in the sense that if you do “light” SFT/RL, models will find the explanation of the samples you gave that fits the base-model prior, and boost the underlying circuitry (rough sketch of what I mean by “light” below).
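A minimal, hypothetical sketch of what “light” SFT could look like, assuming Hugging Face transformers + peft; the base model name, LoRA rank, learning rate, and step count are all illustrative assumptions, not anything from this discussion. The point is just that low-rank adapters, a tiny learning rate, and few steps barely perturb the base model’s weights, so whatever explanation of the demonstrations already fits the base-model prior is what gets reinforced.

```python
# Hypothetical sketch of "light" SFT (assumed setup: transformers + peft, not from the original comment).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # stand-in for whatever base model is being lightly tuned
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters + tiny LR + few steps: the base model's weights (its "prior")
# are barely perturbed, so existing circuitry gets boosted rather than rebuilt.
lora_cfg = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, target_modules=["c_attn"])
model = get_peft_model(model, lora_cfg)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

demos = ["<an SFT demonstration the finetuner wants imitated>"]  # placeholder data
for text in demos * 100:  # on the order of hundreds of steps, not a full training run
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```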
Does this make sense?