Why are we throwing the word “Intelligence” around like it actually means anything? The concept is so ill-defined that it should be in the same set as “Love.”
I can’t tell whether you’re complaining about the word as it applies to humans or as it applies to abstract agents. If the former, to a first-order approximation it cashes out to the g factor, which is a perfectly well-defined concept in psychometrics. You can measure it, and it makes decent predictions. If the latter, I think it’s an interesting and nontrivial question how to define the intelligence of an abstract agent; Eliezer’s working definition, at least in 2008, was in terms of efficient cross-domain optimization, and I think other authors use this definition as well.
“Efficient cross-domain optimization” is just fancy words for “can be good at everything”.
Yes. And your point is?
This is the stupid questions thread.
That would be the inefficient cross-domain optimization thread.
Awesome. I need to use this as a swearword sometimes...
“You inefficient cross-domain optimizer, you!”
achieves its value when presented with a wide array of environments.
This is again different words for “can be good at everything”. :-)
When you ask someone to unpack a concept for you it is counter-productive to repack as you go. Fully unpacking the concept of “good” is basically the ultimate goal of MIRI.
I just showed that your redefinition does not actually unpack anything.
I feel that perhaps you are operating on a different definition of unpack than I am. For me, “can be good at everything” is less evocative than “achieves its value when presented with a wide array of environments” in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: Imagine a set of many different non-trivial agents, all of whom are paperclip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
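A minimal toy sketch of that measurement scheme, just to show the quantification goes through. Everything here (the ClipMaximizer agent, the (resources, noise) environments, the scoring function) is invented for illustration, not taken from anyone’s actual proposal:

```python
import random
from statistics import mean

# Hypothetical agent: real agents would be arbitrary programs;
# this toy just varies in a single "skill" parameter.
class ClipMaximizer:
    def __init__(self, skill):
        self.skill = skill

    def run(self, environment):
        # Paperclips produced in one episode: skill interacts with the
        # environment's resources and noise. A stand-in for a real simulation.
        resources, noise = environment
        return max(0.0, self.skill * resources + random.gauss(0, noise))

def intelligence_score(agent, environments, trials=100):
    """The proposed metric: average paperclip output across all environments."""
    return mean(
        agent.run(env)
        for env in environments
        for _ in range(trials)
    )

# A variety of non-trivial simulated environments: (resources, noise) pairs.
environments = [(10, 1), (3, 5), (50, 20), (1, 0.1)]

agents = {"A": ClipMaximizer(0.5), "B": ClipMaximizer(1.2), "C": ClipMaximizer(0.9)}
ranking = sorted(agents,
                 key=lambda name: intelligence_score(agents[name], environments),
                 reverse=True)
print("More intelligent, by this metric:", ranking)
```

The interesting design question is hidden in the `environments` list: the ranking you get depends entirely on which environments you include and how you weight them, which is where the disagreement in this thread actually lives.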
You can use the “can be good at everything” definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it’s just using more technical terms to mean the same thing.
Because it actually does mean something, even if we don’t really know exactly what, and the borders are fuzzy.
When you hear that X is more intelligent than Y, you learn some information, even though you didn’t find out exactly what X can do that Y can’t.
Note that we also use words like “mass” and “gravity” and “probability”; even though we know lots about each, it’s not at all clear what they are (or, like in the case of probability, there are conflicting opinions).
All language is vague. Sometimes vague language hinders us in understanding what another person is saying and sometimes it doesn’t.
Legg & Hutter have given a formal definition of machine intelligence. A number of authors have expanded on it and fixed some of its problems: see e.g. this comment as well as the parent post.
I’m not really sure why you use “love” as an example. I don’t know that much about neurology, but my understanding is that the chemical makeup of love and its causes are pretty well understood. Certainly better understood than intelligence?
I think what you talk about here is certain aspects of sexual attraction. Which are, indeed, often lumped together into the concept of “Love”. Just like a lot of different stuff is lumped together into the concept of “Intelligence”.
This seems like matching “chemistry” to “sexual” in order to maintain the sacredness of love rather than to actually get to beliefs that cash out in valid predictions. People can reliably be made to fall in love with each other given the ability to manipulate some key variables. This should not make you retch with horror any more than the Stanford prison experiment already did. Alternatively, update on being more horrified by the SPE than you were previously.
?
Lots of eye contact is sufficient if the people are both single, of similar age, and each of the other’s preferred gender. But even those conditions could be overcome given some chemicals to play with.
[citation needed]
Did you accidentally leave out some conditions such as “reasonably attractive”?
The fact that English uses the same word for several concepts (which had different names in, say, ancient Greek) doesn’t necessarily mean that we’re confused about neuropsychology.
There seems to be a thing called “competence” for particular abstract tasks. Further, there are kinds of tasks where competence in one task generalizes to the whole class of tasks. One thing we try to measure by intelligence is an individual’s level of generalized abstract competence.
I think part of the difficulties with measuring intelligence involve uncertainty about what tasks are within the generalization class.