How would you start to measure intelligence in non-human systems, such as groups of humans?
One proposal is to measure predictive/reward-seeking ability on random small Turing machines: http://lesswrong.com/lw/42t/aixistyle_iq_tests/
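A minimal sketch of that idea, with loud assumptions: true Turing machines are stood in for by random deterministic finite-state sequence generators (so every environment halts cleanly), the agent interface (`agent(history) -> predicted next symbol`) and the `frequency_agent` baseline are both my inventions for illustration, and the score is simply average next-symbol prediction accuracy over many random environments.

```python
import random

def random_machine(n_states=4, n_symbols=2, seed=0):
    """Stand-in for a 'random small Turing machine': a random
    deterministic finite-state machine emitting one symbol per step."""
    rng = random.Random(seed)
    emit = [rng.randrange(n_symbols) for _ in range(n_states)]
    nxt = [rng.randrange(n_states) for _ in range(n_states)]
    def run(steps):
        state, out = 0, []
        for _ in range(steps):
            out.append(emit[state])
            state = nxt[state]
        return out
    return run

def frequency_agent(history):
    """Toy agent: predict the symbol most frequent in the history so far."""
    return max(set(history), key=history.count) if history else 0

def score_agent(agent, n_envs=50, steps=100):
    """'IQ' of the agent: mean next-symbol prediction accuracy
    across many randomly drawn environments."""
    total = 0.0
    for seed in range(n_envs):
        seq = random_machine(seed=seed)(steps)
        hits = sum(agent(seq[:t]) == seq[t] for t in range(1, steps))
        total += hits / (steps - 1)
    return total / n_envs
```

A smarter agent (say, one that infers the machine's cycle) would score higher than the frequency baseline; the point of the proposal is that the ranking is environment-agnostic rather than anthropocentric.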
Below you ask whether the definition of intelligence per se is important at all; it seems it’s not, and this may be some indication of how to measure what you actually care about.
Maybe a good starting point would be IQ tests?
I am a little curious that the “seven kinds of intelligence” (give or take a few, in recent years) notion has not been mentioned much, if at all, even if just for completeness… Has that been discredited by some body of argument or consensus that I missed somewhere along the line, in the last few years?
Particularly in many approaches to AI, which seem to take, almost a priori (I’ll skip the italics and save them for emphasis), the approach of the day to be: work on (ostensibly) “component” features of intelligent agents as we conceive of them, or as we find them naturalistically.
Thus: (i) machine “visual” object recognition (wavelength band up for grabs, perhaps, since some items might be better identified by shifting up or down the E.M. spectrum); visual intelligence was one of the proposed seven kinds. (ii) Mathematical intelligence, or mathematical (dare I say it) intuition. (iii) Facility with linguistic tasks: comprehension, multiple language acquisition; another of the proposed seven. (iv) Manual dexterity, mechanical ability, and motor skill (as in athletics, surgery, maybe sculpture, carpentry, or whatever); another proposed form of intelligence. And so on. (Aside: it is interesting that these alleged components span the spectrum of difficulty; that is, they include problems from both easy and harder domains, as has been gradually, sometimes unexpectedly, revealed by the school of hard knocks during the decades of AI engineering attempts.)
It seems that actors sympathetic to the top-down, “piecemeal” approach popular in much of the AI community would have jumped at this way of supplanting the ersatz “g”, as the general factor was called decades ago in early gropings in psychology and cognitive science toward a concept of IQ or living intelligence, with what many in cognitive science now consider the more modern view, and what those in AI consider a more approachable engineering design strategy.
Any reason we aren’t debating this more than we are? Or did I miss it in one of the posts, or bypass it inadvertently in my Kindle app (where I read Bostrom’s book)?
Bring these questions back up in later discussions!
Will definitely do so. I can see several upcoming weeks when these questions will fit nicely, including perhaps the very next one. Regards....
IQ tests verify the inbuilt biases of the one doing the questioning. I have failed these gloriously, yet got distinctions at uni. Tests per se mean nothing. [I blame psychologists.] As for non-human systems: they may mimic intelligence, but unless they have sentience they will remain machines. [Luckily.]
Do you see a possible contradiction here?
I scored an IQ of 60 at school; I was overthinking the questions, reasoning too many corners ahead. I had the same experience with a Microsoft “computer driving license” test. I failed completely because I answered based on my knowledge of IT-forensic possibilities. For example, the question: if you delete a file in the Windows trash bin, is the file recoverable? To pass the test you had to give the (technically wrong) answer “no”.
These examples show that we need cascaded test hierarchies:
1. a classification test
2. a test whose complexity level is adapted to the result of the first
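The second stage of such a cascade can be sketched as a simple staircase procedure; the difficulty scale, step rule, and the `answer(level)` testee interface below are all assumptions for illustration, not a standard from the testing literature.

```python
def adaptive_test(answer, n_items=20, level=5, lo=1, hi=10):
    """Simple staircase: raise the difficulty after a correct answer,
    lower it after a wrong one; return the final difficulty level as
    the ability estimate. `answer(level)` is the testee and returns
    True if the item at that difficulty was solved."""
    for _ in range(n_items):
        if answer(level):
            level = min(hi, level + 1)
        else:
            level = max(lo, level - 1)
    return level
```

For a testee that reliably solves everything up to difficulty 7 and nothing above, `adaptive_test(lambda d: d <= 7)` oscillates around and settles near 7, which is the point: the test spends its items near the testee's actual level instead of wasting them on items that are far too easy or too hard.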
Survival was and is the challenge of evolution. Higher intelligence gives more options to cope with deadly dangers.
To measure intelligence we should challenge AI entities with standardized tests. Developing these tests will become a new field of research. IQ tests are not suitable because of their anthropocentrism. Tests should instead measure how well and how fast real-world problems are solved.
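One way to operationalize “how well and how fast” is a score that discounts solution quality by the fraction of a time budget used; the linear discount and the `solve`/`check` interface below are assumptions chosen for illustration, not an established metric.

```python
import time

def timed_score(solve, problem, check, time_limit=10.0):
    """Score = quality * speed factor. `check(problem, answer)` returns
    a quality in [0, 1]; the speed factor falls linearly from 1 to 0
    as wall-clock time approaches the budget."""
    start = time.perf_counter()
    answer = solve(problem)
    elapsed = time.perf_counter() - start
    quality = check(problem, answer)
    speed = max(0.0, 1.0 - elapsed / time_limit)
    return quality * speed
```

So a fast, correct solver scores near 1.0, a correct but budget-exhausting solver scores near 0, and a wrong answer scores 0 regardless of speed.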