Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won’t be any less ethical than normal children).
Why? Seems unlikely to me that there exists a genetic intelligence-dial that just happens to leave all other parameters alone.
I shouldn’t have said “basically not a problem at all.” I should have said “not much more of a problem than the problem we already face with our own children.” I agree that selecting for intelligence might have side-effects on other parameters. But it seems to me those side-effects will likely be small and perhaps even net-positive (it’s not like Einstein, von Neumann, etc. were psychopaths; they seemed pretty normal, as far as values were concerned). Certainly we should be much more optimistic about the alignment-by-default of engineered humans than about the alignment of some massive artificial neural net.
Einstein and von Neumann were also nowhere near superintelligent; they are far better representatives of regular humans than of superintelligences. I think the problem goes deeper. As you apply more and more optimization pressure, statistical guarantees begin to fall apart. You don’t get sub-agent alignment for free, whether the sub-agent is made of carbon or silicon. Case in point: human values have drifted over time relative to the original goal of inclusive genetic fitness.
OK, yeah, fair enough. Still though, the danger seems less than it is in the machine intelligence case.
If there were no trade-offs, evolution would likely have optimized for intelligence already.