First of all, humans are very far from being general intelligences. If we picked a random complicated Turing machine from the space of such machines, we’d probably be pretty hopeless at predicting its behaviour. We would probably score very low on the scale of intelligence used to construct AIXI.
I wonder about this. It sounds plausible, but getting reasonable scores also seems plausible—perhaps even more plausible to me if you allow a human with a computer. It is probably quite sensitive to permitted thinking time. (I’m assuming that the ‘scale of intelligence’ you talk about is Legg’s AIQ.)
It is the kind of thing we could test empirically, but it’s not clear that this would be a good use of resources. How decision-relevant is it for us whether humans are general intelligences?
It’s relevant to exposing some possibly unjustified metaphors. And, actually, if humans did turn out to be generally intelligent, especially without computers, that would a) surprise me and b) be strong evidence for a single-ish scale of intelligence.
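For concreteness, the kind of empirical test mentioned above could look roughly like Legg and Veness's AIQ procedure: sample environments with probability weighted by their complexity (shorter programs more likely, mimicking the 2^-K universal prior), run the agent in each, and average the rewards. The sketch below is a toy stand-in, not the real AIQ setup: the "environments" are periodic bit sequences rather than programs on a BF reference machine, "complexity" is just the period length, and the two agents are hypothetical examples.

```python
import random

def sample_env(rng, max_period=8):
    # Sample a toy environment: a periodic bit pattern, with
    # P(period = k) proportional to 2^-k, so simpler (shorter-period)
    # environments dominate the average, as in the universal prior.
    weights = [2.0 ** -k for k in range(1, max_period + 1)]
    period = rng.choices(range(1, max_period + 1), weights=weights)[0]
    return [rng.randint(0, 1) for _ in range(period)]

def run_agent(agent, pattern, steps=100):
    # The agent sees the observation history and predicts the next bit;
    # its score is the fraction of correct predictions, in [0, 1].
    history, reward = [], 0
    for t in range(steps):
        obs = pattern[t % len(pattern)]
        if agent(history) == obs:
            reward += 1
        history.append(obs)
    return reward / steps

def aiq_estimate(agent, n_envs=200, seed=0):
    # Monte-Carlo average of the agent's reward over sampled environments.
    rng = random.Random(seed)
    return sum(run_agent(agent, sample_env(rng)) for _ in range(n_envs)) / n_envs

# Two hypothetical agents: one that repeats the last observed bit,
# and one that always guesses zero.
copy_last = lambda h: h[-1] if h else 0
guess_zero = lambda h: 0
```

On this toy measure, `copy_last` should outscore `guess_zero`, since constant (period-1) environments carry about half the total weight and copying the last bit nearly always wins there. Whether a human, scored this way over a rich enough environment class and a realistic thinking-time budget, would do well is exactly the open question in the thread.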