But the whole point of my posting was that, if there is convergence (in the second sense), then those initial values may make very little difference in the outcome of the universe.
So to summarize, your conclusion seems to be that we should build an arbitrary-goals AI as soon as possible.
Edit: Wrong, corrected here.
Huh? What exactly do you think you are summarizing? If you want to produce a cartoon version of my opinions on this thread, try “We should do all we can to avoid the FOOMing singleton scenario, instead trying to create a society of reproducing AIs, interlocked with each other and with humanity by a network of dependencies. If we do, the details of the initial goal systems may matter less than they would with a singleton.”
I see, so “if there is convergence” is not a point of theoretical uncertainty, but something that depends on the way the AIs are built. Makes sense (as a position, not something I agree with).
Well, it is both. Convergence in the sense of “outcome is independent of the starting point” has not been proved for any AI/updating architecture. Also, I strongly suspect that the detailed outcome will depend quite a bit on the way AIs interact and produce successors/self-updates, even if the fact of convergence does not.
That reminds me of:
“An AGI raised in a box could become dangerously solipsistic, probably better to raise AGIs embedded in the social network...”
http://twitter.com/#!/bengoertzel/status/30077904524148736
Goertzel’s comment doesn’t even make sense to me. Why is he placing ‘in a box’ in opposition to ‘embedded in the social network’? The two issues are orthogonal: AIs can be social or singleton, either in a box or in the real world. ETA: Well, if you mean the human social network, then I suppose a boxed AI cannot participate. Though I suppose we could let some simulated humans into the box to keep the AI company.
Besides, I’ve never really considered solipsists to be any more dangerous than anyone else.
“Now I will destroy the whole world—What a Bokononist says before committing suicide.”
We don’t have any half-decent simulated humans, though.