I really should have done a better job explaining this in the original comment; it’s not clear we could actually make someone with an IQ of 1700, even if we were to stack additive genetic variants one generation after the next. For one thing, you probably need to change other traits alongside the IQ variants to make a viable organism (larger birth canals? Stronger necks? Greater mental stability?). And for another, it may be that if you just keep pushing in the same “direction” within some higher-dimensional vector space, you’ll eventually overshoot some optimum. You may need to re-measure intelligence every generation, then edit based on whatever genetic variants are meaningfully associated with higher cognitive performance in those enhanced people, to continue getting large generation-to-generation gains.
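The overshoot worry can be sketched with a toy model (my illustration; the dimensions, step sizes, and fitness function are all made up). Trait values live in a small vector space with a single fitness peak: stacking edits in one fixed direction eventually blows past the peak, while re-estimating the direction each generation does not.

```python
# Toy sketch of the "overshoot" worry (my illustration; all numbers
# are invented). Traits live in a 5-dimensional space with a single
# fitness peak at `optimum`.
import numpy as np

rng = np.random.default_rng(0)
optimum = np.full(5, 10.0)                    # hypothetical fitness peak

def fitness(x):
    return -np.sum((x - optimum) ** 2)        # falls off with distance

fixed = np.zeros(5)
adaptive = np.zeros(5)
direction = rng.normal(size=5)
direction *= 3.0 / np.linalg.norm(direction)  # fixed per-generation edit

for generation in range(10):
    fixed = fixed + direction                 # keep pushing the same way
    gap = optimum - adaptive                  # "re-measure" each generation
    dist = np.linalg.norm(gap)
    if dist > 1e-12:
        adaptive = adaptive + min(3.0, dist) * gap / dist

print(fitness(fixed), fitness(adaptive))      # fixed strategy ends up far worse
```

The adaptive strategy converges to the peak; the fixed strategy wanders off into low-fitness territory, which is the sense in which stacking the same variants indefinitely could stop paying off.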
I think these kinds of concerns are basically irrelevant unless there is a global AI disaster that kills hundreds of millions of people and gets the tech banned for a century or more. At best you’re probably going to get one generation of enhanced humans before we make the machine god.
For a given level of IQ controlling ever higher ones, you would at a minimum require the creature to decide morals, i.e. is Moral Realism true, and if so, what is it?
I think it’s neither realistic nor necessary to solve these kinds of abstract philosophical questions to make this tech work. I think we can get extremely far by doing nothing more than picking low-hanging fruit (increasing intelligence, decreasing disease, increasing conscientiousness and mental energy, etc.).
I plan to leave those harder questions to the next generation. It’s enough to just go after the really easy wins.
I additionally believe that they would not be able to persuade lower-IQ creatures of such values, and would therefore be forced into deception, etc.
Manipulation of others by enhanced humans is somewhat of a concern, but I don’t think it’s for this reason. I think the biggest concern is just that smarter people will be better at achieving their goals, and manipulating other people into carrying out one’s will is a common and time-honored tactic to make that happen.
In theory we could at least reduce this tendency a little by tamping down the upper end of sociopathic tendencies with editing, but the issue is that personality traits have an unusual genetic architecture with lots of non-linear interactions. That means you need larger sample sizes to figure out which genes need editing.
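The sample-size point can be illustrated with a toy power simulation (my illustration; the effect sizes, allele frequencies, and sample size are all invented). With equal true effects, a gene–gene interaction is detected less often than an additive main effect at the same n, so mapping non-linear architecture demands more data:

```python
# Toy power simulation (my illustration; all numbers are made up).
# Compares how often a simple regression detects an additive main
# effect vs. a gene-gene interaction of the same true effect size.
import numpy as np

rng = np.random.default_rng(1)

def detection_rates(n, trials=200, beta=0.2):
    main_hits = inter_hits = 0
    for _ in range(trials):
        g1 = rng.binomial(1, 0.5, n)          # two common binary variants
        g2 = rng.binomial(1, 0.5, n)
        y = beta * g1 + beta * g1 * g2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), g1, g2, g1 * g2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        main_hits += abs(coef[1] / se[1]) > 1.96   # main effect of g1
        inter_hits += abs(coef[3] / se[3]) > 1.96  # g1 x g2 interaction
    return main_hits / trials, inter_hits / trials

main_power, inter_power = detection_rates(800)
print(main_power, inter_power)   # interaction detected less often
```

The interaction term varies in fewer individuals than either variant alone, so its coefficient is estimated more noisily; the same logic, compounded across many interacting loci, is why personality-type traits need much bigger cohorts.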