How confident are you in our ability, supposing everyone mysteriously possessed the will to do so (or we somehow implemented such a program against people’s wills), to carry out a eugenics program that produced, say, either as much as a 5% improvement in the maximum measured intelligence and conscientiousness in the population, or as much as a 5% increase in the frequency of the highest-measured I-and-C ratings (or some other concretely articulated target benefit, if those aren’t the right ones), in less than, say, five generations?
Very high, due to the Flynn Effect. Humans are already recursively self-improving. The problem is that the self-improvement is too slow compared to the upper bound of what we might see from a recursively self-improving AI.
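As a rough back-of-the-envelope check on that answer (assuming the commonly cited Flynn Effect rate of roughly 3 IQ points per decade and about 25 years per generation, neither figure appearing in the exchange itself):

\[
5 \text{ generations} \times 25 \,\tfrac{\text{years}}{\text{generation}} \times 0.3 \,\tfrac{\text{points}}{\text{year}} \approx 37.5 \text{ points},
\]

which, against a population mean of 100, would be on the order of a 37% gain, far beyond the 5% target. The extrapolation is only as good as its assumptions, though, and it fails if the effect has stalled, as noted below.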
Hsu seems pretty confident (http://lesswrong.com/lw/7wj/get_genotyped_for_free_if_your_iq_is_high_enough/5s84), but not due to the Flynn Effect, which may have stalled out already.