Do you mean to say that only something that approximates human intelligence can initiate an “AI takeoff”? If so, can you summarize your reasons for believing that?
So this is a valid point that betrays a possibly unjustified leap in logic on my part. I think the thought process (although honestly I haven’t thought about it that much) is something to the effect that any optimizer powerful enough to self-optimize its way to a substantial takeoff is going to have to predict and interact with its environment well enough that it will effectively need to solve the natural language problem and talk to humans (we are, after all, a major part of its environment until/unless it decides that we are redundant). But the justification for this is to some extent just weak intuition, and the known sample of mind-space is very small, so intuitions informed by that experience should be suspect.
(nods) Yeah, agreed.
I would take it further, though. Given that radically different kinds of minds are possible, the odds that the optimal architecture for supporting self-optimization at a given level of intelligence happens to be something approximately human seem pretty low.
On the other hand, is there any way to think about the odds of humans inventing a self-optimizing program that doesn’t resemble a human mind?
I’m not sure.
I think if I had a better grasp of whether, and why, I think humans are (or aren’t) capable of building self-optimizing systems at all, I would have a better grasp of the odds of those systems being of any particular type.