No, I mean 1700. There really are that many IQ-affecting variants to stack: on the order of 20,000 or so.
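To make the arithmetic behind this claim explicit, here is a toy back-of-envelope under a naive purely additive model. The per-variant effect size is a hypothetical round number chosen so the figures line up, not a measured value:

```python
# Toy additive model for stacked IQ-affecting variants.
# Assumes every edited variant contributes independently and additively;
# the per-variant effect size (0.08 points) is a hypothetical illustration.
baseline_iq = 100
num_variants = 20_000
effect_per_variant = 0.08  # IQ points per variant (assumed)

predicted_iq = baseline_iq + num_variants * effect_per_variant
print(predicted_iq)  # 1700.0
```

The rest of this thread discusses why pure additivity almost certainly breaks down long before you get anywhere near that figure.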
You’re correct of course that if we don’t see some kind of pause, gene editing is probably not going to help.
But you don’t need a multi-generational one for it to have a big effect. You could create people smarter than any that have ever lived in a single generation.
(I believe that WBE can get all the way to a positive singularity: a group of WBEs could self-optimize, sharing the latest hardware as it became available in a coordinated fashion so that no single emulation or group would gain a decisive advantage. Coordination would get easier as the WBEs became more capable and rational.)
Maybe, but my impression is that whole brain emulation is much further out, technologically speaking, than gene editing. We already have basically all the tools necessary for genetic enhancement except a reliable way to convert edited cells into sperm, eggs, or embryos. Last I checked, we JUST mapped the neuronal structure of fruit flies for the first time last year, and it's still not enough to recreate the functionality of the fruit fly because we're still missing the connectome.
Maybe some alternative path like high-fidelity fMRI will yield something. But my impression is that stuff is pretty far out.
I also worry about the potential for FOOM with uploads. Genetically engineered people could be very, very smart, but they can’t make a million copies of themselves in a few hours. There are natural speed limits to biology that make it less explosive than digital intelligence.
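The difference in replication speed can be made concrete with a doubling-time calculation. The one-hour copy time for an upload is an assumed figure for illustration:

```python
import math

# How many doublings does it take to reach a million copies?
target_copies = 1_000_000
doublings = math.ceil(math.log2(target_copies))  # 20

# If an upload can copy itself in one hour (assumed), a million
# copies take on the order of 20 hours.
upload_hours = doublings * 1

# A human generation is roughly 25 years; in hours, that dwarfs it.
human_hours = 25 * 365 * 24
print(doublings, upload_hours, human_hours)
```

Even granting generous assumptions about biology, the gap between hours and decades per "copy" is the natural speed limit in question.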
Why should you believe an IQ 200 person can control an IQ 400 one any more than an IQ 80 person could control an IQ 200 one? (And if you believe gene editing can get to IQ 600, then you must believe the AI can self-optimize well above that. However, I think there is almost no chance you will get that high because of diminishing returns, correlated changes, etc.)
The hope is of course that at some level of intelligence we will discover fundamental principles that give us confidence our current alignment techniques will extrapolate to much higher levels of intelligence.
Additionally, there is unknown X-risk and S-risk from a multi-generational pause with our current tech. Once a place goes bad, as North Korea has, modern technology means there is likely no coming back. If such centralization is a one-way street, then with time an ever larger percentage of the world will fall under such systems, perhaps 100%.
This is an interesting take that I hadn't heard before, but I don't really see any reason to think our current tech gives a big advantage to autocracy. The world has been getting more democratic and prosperous over time. There are certainly occasional local reversals, but I don't see any compelling reason to think we're headed towards a permanent global dictatorship with current tech.
I agree the risk of a nuclear war is still concerning (as is the risk of an engineered pandemic), but these risks seem dwarfed by those presented by AI. Even if we create aligned AGI, the default outcome IS a global dictatorship, as the economic incentives all point towards aligning it with its creators and controllers rather than with the rest of humanity.
OK, I guess there is a massive disagreement between us on what IQ increases gene changes can achieve. Just putting it out there: if you make an IQ 1700 person, they can immediately program an ASI themselves, have it take over all the data centers, rule the world, etc.
For a given level of IQ to control ever higher ones, you would at a minimum require the creature to settle questions of morality, i.e. is Moral Realism true, and if so, what is the true morality? Otherwise, with increasing IQ there is the potential that it could think deeply, change its values, further believe that it could not persuade lower-IQ creatures of those values, and therefore be forced into deception, etc.
I really should have done a better job explaining this in the original comment; it's not clear we could actually make someone with an IQ of 1700, even if we were to stack additive genetic variants one generation after the next. For one thing, you probably need to change other traits alongside the IQ variants to make a viable organism (larger birth canals? Stronger necks? Greater mental stability?). And for another, it may be that if you just keep pushing in the same "direction" within some higher-dimensional vector space, you'll eventually end up overshooting some optimum. You may need to re-measure intelligence every generation and then do editing based on whatever genetic variants are meaningfully associated with higher cognitive performance in those enhanced people to continue to get large generation-to-generation gains.
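The overshoot worry can be illustrated with a minimal sketch: if fitness is a hill with a single optimum, pushing ever further along one fixed direction first improves the trait and then degrades it. The fitness function and step sizes here are purely hypothetical:

```python
# Minimal illustration of overshooting an optimum by pushing in one
# fixed direction. Fitness is a toy quadratic peaked at x = 10;
# each "generation" of edits moves another 4 units the same way.
def fitness(x):
    return -(x - 10) ** 2  # hypothetical fitness surface, optimum at x = 10

steps = [0, 4, 8, 12, 16]  # cumulative movement along the fixed direction
values = [fitness(x) for x in steps]
print(values)  # fitness rises toward the optimum, then falls past it
```

This is the geometric reason for re-measuring every generation: the useful "direction" itself has to be re-estimated once the population has moved far from where the original associations were measured.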
I think these kinds of concerns are basically irrelevant unless there is a global AI disaster that kills hundreds of millions of people and gets the tech banned for a century or more. At best you're probably going to get one generation of enhanced humans before we make the machine god.
For a given level of IQ to control ever higher ones, you would at a minimum require the creature to settle questions of morality, i.e. is Moral Realism true, and if so, what is the true morality?
I think it's neither realistic nor necessary to solve these kinds of abstract philosophical questions to make this tech work. I think we can get extremely far just by picking low-hanging fruit (increasing intelligence, decreasing disease, increasing conscientiousness and mental energy, etc.).
I plan to leave those harder questions to the next generation. It’s enough to just go after the really easy wins.
further believe that it could not persuade lower-IQ creatures of those values, and therefore be forced into deception, etc.
Manipulation of others by enhanced humans is somewhat of a concern, but not for this reason. I think the biggest concern is just that smarter people will be better at achieving their goals, and manipulating other people into carrying out one's will is a common and time-honored tactic for making that happen.
In theory we could at least reduce this tendency a little by tamping down the upper end of sociopathic tendencies with editing, but the issue is that personality traits have a unique genetic structure with lots of non-linear interactions. That means you need much larger sample sizes to figure out which genes need editing.
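One way to see why non-linear interactions inflate sample-size requirements: the number of model parameters to estimate grows quadratically once pairwise interaction terms enter the picture. The counts below are just the combinatorics, not a full statistical power analysis:

```python
from math import comb

# Parameters to estimate for k candidate variants:
# additive-only vs. additive plus all pairwise interaction terms.
def param_counts(k):
    additive = k
    with_pairwise = k + comb(k, 2)  # k main effects + k-choose-2 interactions
    return additive, with_pairwise

for k in (100, 1_000, 10_000):
    additive, with_pairwise = param_counts(k)
    print(k, additive, with_pairwise)
```

At 10,000 variants the interaction model has roughly 50 million parameters versus 10,000 for the additive one, which is the intuition behind needing far larger samples for personality traits.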