So you think I’m underestimating the misalignment between replicators->vehicles? Not sure that undermines my central point, that we’re still fundamentally ignorant of our goal structures and that has really negative consequences for human->AI alignment.
Or do you think our misalignment with our genes is actually a cause for optimism when it comes to AI alignment somehow?
I don’t think it’s a cause for optimism, no. Was just responding to that specific point. I agree we are ignorant of our goal structures and that this has negative consequences for human->AI alignment.