Only recently has replicator->vehicle alignment begun to erode, for a number of cultural reasons (reward hacking, increased gender equality, etc.), though none of these was a conscious effort to rebel against the genetic mandate.
Despite the eroding alignment, the human population still managed to double in my lifetime, from roughly 4 to 8 billion. Growth rates are declining (though they could reverse under the right conditions), but the population continues to grow and isn't projected to start declining for decades. The replicators have managed to keep a relatively tight yoke on their smartest inventions, and this now has implications for the inventions of the inventions themselves.
This feels too cheery to me. Consider the diff between how many kids I could have and how many kids I actually have. Calculate this diff for everyone in today's society. I'm guessing that in the USA it's something like 3x, i.e. on average people could be having 3x more kids if they made reproductive fitness their main life goal. That's a big misalignment! That's like a situation where our AGIs are supposed to be steering the world towards utopia on our behalf (defeating terrorists, aligning superintelligence, etc.), but the package of things they do is only 1/3rd as good as what they would achieve if they actually made it their priority. That is, we'd be indifferent between this situation and a dice roll that gave us utopia on a 5 or 6 and human extinction otherwise. (Does this assume risk neutrality? Sorta, but not in a problematic way, because genes are risk neutral w.r.t. # of children, AND because many Americans deliberately choose to have 0 kids. So I think the analogy checks out.)
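(A minimal sketch of the dice-roll arithmetic, assuming extinction is valued at zero and value scales linearly with outcomes:

\[
\mathbb{E}[\text{dice roll}] = \tfrac{2}{6}\,U_{\text{utopia}} + \tfrac{4}{6}\cdot 0 = \tfrac{1}{3}\,U_{\text{utopia}},
\]

which is the same expected value as a world that captures only a third of the achievable value with certainty; hence the claimed indifference.)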
So you think I'm underestimating the replicator->vehicle misalignment? Not sure that undermines my central point, though: that we're still fundamentally ignorant of our own goal structures, and that this has really negative consequences for human->AI alignment.
Or do you think our misalignment with our genes is actually a cause for optimism when it comes to AI alignment somehow?
I don’t think it’s a cause for optimism, no. Was just responding to that specific point. I agree we are ignorant of our goal structures and that this has negative consequences for human->AI alignment.