My interpretation is that you’re 99% of the way there in terms of work required if you start out with humans rather than creating a de novo mind, even if many/most humans currently or historically are not “aligned”. Like, you don’t need very many bits of information to end up with a nice “aligned” human. E.g. maybe you lightly select their genome for prosociality + niceness/altruism + wisdom, and treat them nicely while they’re growing up, and that suffices for the majority of them.
I’d actually maybe agree with this, though with the caveat that there’s a real possibility you’ll need a lot more selection/firepower as a human gets smarter, because you lack the kind of technical control over humans that you have over AIs.
Also true, though maybe only for O(99%) of people.
I’d probably bump that down to O(90%) at most, and it could get worse (I’m downranking based on the number of psychopaths, sociopaths, and narcissists that exist).