Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression.
This may be tangential to your point, but it’s worth remembering that human intelligence has a very special property, which is that it is strongly domain-independent. A person’s ability to solve word puzzles correlates with her ability to solve math puzzles. So you can measure someone’s IQ by giving her a logic puzzle test, and the score will tell you a lot about the person’s general mental capabilities.
Because of that very special property, people feel more or less comfortable referring to “intelligence” as a tangible thing that impacts the real world. If you had to pick between two doctors to perform a life-or-death operation, and you knew that one had an IQ of 100 and the other an IQ of 160, you would probably go with the latter. Most people would feel comfortable with the statement “Harvard students are smarter than high school dropouts”, and make real-world predictions based on it (e.g., a Harvard student is more likely to be able to write a good computer program than a high school dropout, even if the former didn’t study computer science).
The point is that there’s no reason this special domain-independence property of human intelligence should hold for non-human reasoning machines. So while it makes sense to score humans based on this “intelligence” quantity, it might be totally meaningless to attempt to do so for machines.
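The correlation claim above is essentially the single-general-factor (“g”) picture. A toy simulation makes it concrete (the numbers and the one-factor form are illustrative assumptions, not data): if every test score is a shared ability plus test-specific noise, scores on unrelated puzzle types come out positively correlated, which is what lets one test predict performance on another.

```python
import random

random.seed(0)

# Toy single-factor model (hypothetical numbers): each person's score on
# any test = shared general ability g + independent test-specific noise.
n = 10_000
g = [random.gauss(0, 1) for _ in range(n)]
word_scores = [gi + random.gauss(0, 0.7) for gi in g]
math_scores = [gi + random.gauss(0, 0.7) for gi in g]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Distinct tests correlate because both load on the same shared factor;
# with these noise levels the correlation is roughly 1 / 1.49 ≈ 0.67.
print(corr(word_scores, math_scores))
```

Note that this picture is exactly what need not carry over to machines: nothing forces an artificial reasoner's capabilities to load on one shared factor.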
This may be tangential to your point, but it’s worth remembering that human intelligence has a very special property, which is that it is strongly domain-independent.
Not so fast. Human intelligence is relatively domain-independent. But human minds are constantly exploiting known regularities of the environment (by making assumptions) to make better inferences. These regularities make up a tiny sliver of the Platonic space of generating functions. By (correctly) assuming we’re in that sliver, we vastly improve our capabilities compared to if we were AIXIs lacking that knowledge.
Human intelligence appears strongly domain-independent because it generalizes to all the domains that we see. It does not generalize to the full set of computable environments—no intelligence can do that while still performing as well in each as we do in this environment.
Non-human animals are likewise “domain-independently intelligent” for the domains that they exist in. Most humans would die, for example, if dropped in the middle of the desert, ocean, or arctic.
But human minds are constantly exploiting known regularities of the environment (by making assumptions) to make better inferences.
Not just by making assumptions: you can also learn (domain-specific) optimizations that don’t introduce new info but improve ability, letting you understand more from the info you already have (better conceptual pictures for natural science; math).
Another example of how domain-dependent human intelligence actually is: optical illusions.
Optical illusions occur when an image violates an assumption your brain makes to interpret visual data, causing it to misinterpret the image. And remember, this is only stepping slightly outside the boundary of the assumptions your brain makes.
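The same failure mode shows up in any inference system. A minimal sketch (the linear model and sine-world are illustrative assumptions, standing in for the brain's visual priors): a learner fits a linear rule on the narrow sliver of inputs it has seen, where the assumption holds well, and its inferences break down just outside that sliver.

```python
import math

# "Training environment": the narrow sliver x in [0, 1], where the
# world (sin) happens to look approximately linear.
xs = [i / 100 for i in range(101)]
ys = [math.sin(x) for x in xs]

# The learned "assumption": a least-squares line through the origin.
slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def err(x):
    """Absolute error of the linear assumption at input x."""
    return abs(slope * x - math.sin(x))

print(err(0.5))  # small: inside the sliver, the assumption pays off
print(err(3.0))  # large: slightly outside it, inference fails badly
```

Inside the sliver the assumption buys accuracy for free, exactly as the comment above describes; the "illusion" is just what the model outputs when queried outside the region its assumption was tuned to.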