Isn’t the expectation of encountering an intelligence so advanced that it’s perfect and infallible essentially the expectation of encountering God?
Which god? If by “God” you mean “something essentially perfect and infallible,” then yes. If by “God” you mean “that entity that killed a bunch of Egyptian kids” or “that entity that’s responsible for lightning” or “that guy that annoyed the Roman empire 2 millennia ago,” then no.
Also, essentially infallible to us isn’t necessarily essentially infallible to it (though I suspect that any attempt at AGI will have enough hacks and shortcuts that we can see faults too).
Which god? If by “God” you mean “something essentially perfect and infallible,” then yes.
That one. The big man in the sky invented by shepherds doesn’t interest me much. Just because I’m a better optimizer of resources in certain contexts than an amoeba doesn’t make me perfect and infallible. Just because X is an orders-of-magnitude better optimizer than Y doesn’t make X perfect and infallible. Just because X can rapidly optimize itself doesn’t make it infallible either. Yet when people talk about the post-singularity super-optimizers, they seem to be talking about some sort of Sci-Fi God.
Y’know, I’m not really sure where that idea comes from. The optimization power of even a moderately transhuman AI would be quite incredible, but I’ve never seen a convincing argument that intelligence scales with optimization power (though the argument that optimization power scales with intelligence seems sound).
“Optimization power” is more or less equivalent to “intelligence” in local parlance. Do you have a different definition of intelligence in mind?
One that doesn’t classify evolution as intelligent.
So the nonapples theory of intelligence, then?
More generally, a theory that requires modeling of the future for something to be intelligent.
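To make that distinction concrete, here is a minimal sketch (a hypothetical toy example, not any commenter’s actual proposal; the names `fitness`, `evolve`, and `plan` are made up for illustration). Both procedures steer the same toy objective toward its optimum, so both count as optimizers, but only the second one models the future before acting, which is roughly the criterion being proposed here.

```python
import random

def fitness(x: float) -> float:
    """Toy objective: higher is better, peak at x = 3."""
    return -(x - 3.0) ** 2

def evolve(generations: int = 200, pop_size: int = 20) -> float:
    """Evolution-style optimization: blind mutation plus selection.

    Nothing here predicts anything; variation is random and selection only
    looks at the measured fitness of candidates that already exist.
    """
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Mutate: random, undirected variation.
        offspring = [x + random.gauss(0, 0.5) for x in population]
        # Select: keep the fitter half of parents plus offspring.
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return population[0]

def plan(start: float = 0.0, horizon: int = 20, step: float = 0.5) -> float:
    """Planning-style optimization: explicitly model future states.

    At each step the agent simulates where each candidate move would lead
    and picks the move whose *predicted* outcome scores best.
    """
    x = start
    for _ in range(horizon):
        candidates = [x - step, x, x + step]
        # The "model of the future" is just fitness() applied to hypothetical
        # next states before any of them is actually taken.
        x = max(candidates, key=fitness)
    return x

if __name__ == "__main__":
    print("evolution found:", round(evolve(), 2))
    print("planner found:  ", round(plan(), 2))
```

Both converge on roughly the same answer, which is why a definition that only counts outcomes (optimization power) lumps them together, while a definition that requires modeling the future separates them.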