I decided to read through the essays on facingthesingularity, and I found more faults than I care to address. Also, I can see why you might think that the workings of the human mind are simple, given that the general attitude here is that you should go around maximizing your “utility function”. That is utter and complete nonsense, for reasons that deserve their own blog post. What I see more than anything is a bunch of ex-Christians worshipping their newfound hypothetical machine god, and doing so by lowering themselves to the level of the machine rather than raising the machine to the level of man.
I’ll give one good example to make clear what I mean:
(from facingthesingularity)
But that can’t possibly be correct. The probability of Linda being a bank teller can’t be less than the probability of her being a bank teller and a feminist.
This is my “Humans are crazy” Exhibit A: The laws of probability theory dictate that as a story gets more complicated, and depends on the truth of more and more claims, its probability of being true decreases. But for humans, a story often seems more likely as it is embellished with details that paint a compelling picture: “Linda can’t be just a bank teller; look at her! She majored in philosophy and participated in antinuclear demonstrations. She’s probably a feminist bank teller.”
But, the thing is, context informs us that while a philosophy major is unlikely to work for a bank, a feminist is much more likely to work a “pink collar” job such as secretarial work or bank telling, where she can use the state to monger for positions, pay, and benefits above and beyond what she deserves. A woman who would otherwise have no interest in business or finance, when indoctrinated by the feminist movement, will leap to take a crappy office job so she can raise her fist in the air in onionistic fashion against the horrible man-oppression righteously defeated with her superior woman intellect. The simple fact that “philosophy” in a modern school amounts to “The History of Philosophy”, and is utterly useless, might also clue one in on the integrity, or lack thereof, that a person might have, although of course it isn’t conclusive.
In short, impressive “logical” arguments about how probabilities of complements must be additive can only be justified in a vacuum without context, a situation that does not exist in the real world.
None of that fog obscures the basic fact that the number of feminist female bank tellers cannot possibly be greater than the number of female bank tellers. The world is complex, but that does not mean that there are no simple truths about it. This is one of them.
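The point is checkable mechanically, if that helps: in any population whatsoever, the conjunction can never out-count either conjunct. A minimal sketch in Python (the population and its base rates are entirely made up; only the inequality matters):

```python
import random

random.seed(0)

# Hypothetical population: each person is (is_bank_teller, is_feminist),
# drawn with arbitrary made-up base rates. The exact numbers don't matter.
population = [
    (random.random() < 0.05, random.random() < 0.30)
    for _ in range(100_000)
]

tellers = sum(1 for is_teller, _ in population if is_teller)
feminist_tellers = sum(
    1 for is_teller, is_feminist in population if is_teller and is_feminist
)

# The conjunction picks out a subset of either conjunct, so this holds
# for ANY population, whatever the correlation between the two traits:
assert feminist_tellers <= tellers
print(feminist_tellers, "<=", tellers)
```

No amount of context can reverse the inequality, because the feminist bank tellers are counted inside the bank tellers.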
People have thought up all manner of ways of exonerating people from the conjunction fallacy, but if you go back to Eliezer’s two posts about it, you will find some details of the experiments that have been conducted. His conclusion:
The conjunction fallacy is probably the single most questioned bias ever introduced, which means that it now ranks among the best replicated. The conventional interpretation has been nearly absolutely nailed down. Questioning, in science, calls forth answers.
The conjunction error is an error, and people do make it.
I reread that section, and you are correct: given that they don’t tell you whether or not she is a feminist, it cannot be used as a criterion to determine whether or not she is a banker. However, I would say that the example, in typical public-education style, is loaded and begs an incorrect answer. Since the only data you are given is insufficient to draw any conclusions, the participant is led to speculate without understanding the limitations of the question.
As for “utility function”, there are at least five reasons why it is not just wrong, but entirely impossible.
1: Utility is heterogeneous. Which gives you more “utility”, a bowl of ice cream or a chair? The question itself is nonsensical; the quality/type of utility gained from a chair and a bowl of ice cream are entirely different.
2: Utility is complementary. If I own a field, the field by itself may be useless to me. Add a picnic table and some food, and suddenly the field gains utility beyond the food, table, and field individually. Perhaps I could run horses through the field, or add some labor and intelligent work and turn it into a garden, but the utility I get from it depends on my preferences (which may change) and on its combination with other resources and a plan. Another example: a person who owns a yacht would probably get more “utility” out of going to the ocean than someone who does not.
3: Utility is marginal. For the first three scoops of ice cream, I’d say I get equal “utility” from each. The fourth scoop yields comparatively less “utility” than the previous three, and by the fifth the utility becomes negative, as I feel sick afterwards. By six scoops I’m throwing away ice cream. On the other hand, if I have 99 horses, gaining or losing one would not make much difference to the utility I get from them, but if I only have 2 horses, losing one could mean losing more than half of my utility. Different things have different useful quantities in different situations, depending on how they are used.
4: Utility cannot be measured. This should be obvious. Even if we were to invent a magical brain scanner that could measure brain activity in vivo at high resolution, utility is not the same for the same thing every time it is experienced, and you still have the apples-to-oranges problem that makes the comparison meaningless to begin with.
5: Human psychology is not a mere matter of using logic correctly or not. In this case it is definitely a misapplication, but it seems the only psychology that gets any attention around here is anecdotes from college textbooks on decision-making and some oversimplified mechanistic theorizing from neuroscience. You talk about anchoring like it’s some horrible disease, when it’s the same fundamental process required for memory and the mastery of concepts. You’ve probably heard of dissociation, but you probably wouldn’t believe me if I told you that memory can be flipped on and off like a light switch at the whim of your unconscious.
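Incidentally, the pattern described in point 3 (roughly constant returns, then diminishing, then negative) can be written down as a toy utility curve; the coefficients below are invented purely for illustration:

```python
def ice_cream_utility(scoops: int) -> float:
    """Toy utility of eating `scoops` of ice cream. The quadratic
    penalty gives diminishing, then negative, marginal returns.
    Coefficients are invented purely for illustration."""
    return 10 * scoops - 1.25 * scoops ** 2

# Marginal utility of each successive scoop, from the 1st to the 6th:
marginal = [ice_cream_utility(n) - ice_cream_utility(n - 1) for n in range(1, 7)]
print(marginal)

assert all(later < earlier for earlier, later in zip(marginal, marginal[1:]))
assert marginal[4] < 0  # the fifth scoop already makes things worse
```

The 99-vs-2 horses case is the same curve read at different points: the marginal value of one unit depends on how many you already have.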
That aside, treating intelligence as a machine that optimizes things misses the entire point of intelligence. If you had ever read Douglas Hofstadter’s “Gödel, Escher, Bach” or Christopher Alexander’s “The Nature of Order” series, you might have a greater appreciation for the role that abstract pattern recognition and metaphor play in intelligence.
Finally, I read two “papers” from SI, and found them entirely unprofessional. They were both full of vague terminology and unjustified assertions, and were written in a colloquial style that pretty much begs the reader to believe the crap they’re spewing. You get lots of special graphs showing how a superhuman AI would be something like two orders of magnitude more intelligent than humans, but no justification for how these machines will magically be able to produce the economic resources to reach that level of development “overnight”. Comparing modern “AIs” to mice is probably the most absurd fallacy I’ve seen thus far. Even the most sophisticated AI for driving cars cannot drive on a real road; its “intelligence” is overall still lacking in sophistication compared to a honey bee, and the equipment required to produce its rudimentary driving skills far outweighs the benefits. Computer hardware may improve regularly by Moore’s Law, but the field of AI research does not, and there is no evidence that we will see a jump in computer intelligence from below insects to above orangutans any time soon. When we do, it will probably take them another 50–100 years to leave us, by comparison, at orangutan level.
Even the most sophisticated AI for driving cars cannot drive on a real road,
This is false, though there are currently situations that will prompt it to hand control back to the human driver, and some conditions (such as high reflectivity / packed snow) that it can’t handle yet.
1: Utility is heterogeneous. Which gives you more “utility”, a bowl of ice cream or a chair? The question itself is nonsensical; the quality/type of utility gained from a chair and a bowl of ice cream are entirely different.
It’s not nonsensical; it means “would you rather have a bowl of ice cream or a chair?” Of course the answer is “it depends”, but no one ever claimed that U(x + a bowl of ice cream) − U(x) doesn’t depend on x.
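To make that state-dependence concrete, here is a minimal sketch; the state representation and every number in it are made up for illustration:

```python
# Utility as a function of the whole state, not of goods in isolation.
# The state is just a dict of holdings; the valuation rule is invented.
def utility(state: dict) -> float:
    scoops = state.get("ice_cream_scoops", 0)
    chairs = state.get("chairs", 0)
    # Toy numbers: diminishing returns on ice cream, flat value per chair.
    return 8 * scoops - scoops ** 2 + 3 * chairs

def marginal_value_of_scoop(state: dict) -> float:
    after = dict(state, ice_cream_scoops=state.get("ice_cream_scoops", 0) + 1)
    return utility(after) - utility(state)

# U(x + a scoop) - U(x) depends on x:
print(marginal_value_of_scoop({"ice_cream_scoops": 0}))  # 7
print(marginal_value_of_scoop({"ice_cream_scoops": 5}))  # -3
```

The same extra scoop is worth different amounts in different states, which is exactly the heterogeneity and marginality objections handled inside one function.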
treating intelligence as a machine that optimizes things misses the entire point of intelligence. If you had ever read Douglas Hofstadter’s “Gödel, Escher, Bach” or Christopher Alexander’s “The Nature of Order” series, you might have a greater appreciation for the role that abstract pattern recognition and metaphor play in intelligence.
Eliezer has read GEB and praised it above the mountains (literally). So a charitable reader of him and his colleagues might suppose that they know the point about pattern recognition, but do not see the connection that you find obvious. And in fact I don’t know what you’re responding to, or what you think your second quoted sentence has to do with the first, or what practical conclusion you draw from it through what argument. Perhaps you could spell it out in detail for us mortals?
In short, impressive “logical” arguments about how probabilities of complements must be additive can only be justified in a vacuum without context, a situation that does not exist in the real world.
Every “context” can be described as a set of facts and parameters, AKA more data. Perfect data on the context means perfect information. Perfect information means perfect choice and perfect predictions. Sure, it might seem to you like the logical arguments expressed are “too basic to apply to the real world”, but a utility function is really only ever “wrong” when it fails to apply the correct utility to the correct element (“sorting out your priorities”), whether that’s by improper design, lack of self-awareness, missing information or some other hypothetical reason.
For every “no but theory doesn’t apply to the real world” or “theory and practice are different” argument, there is always an explanation for the proposed difference between theory and reality, and this explanation can be included in the theory. The point isn’t to throw out reality and use our own virtual-theoretical world. It’s to update our model (the theory) in the most sane and rational way, over and over again (constantly and continuously) so that we get better.
Likewise, maximizing one’s own utility function is not the reduce-oneself-to-machine, worship-the-machine-god business that you seem to believe it is. I have emotions: I get angry, I get irritated (e.g. at your response*), I am happy, etc. Yet it appears that, in hindsight, I’ve been maximizing my utility function for several years without knowing that that’s what it’s called (and I learned the terminology and more correct/formal ways of talking about it once I started reading LessWrong).
Your “utility function” is not one simple formula where you plug values into variables, compute, and call it a decision. The utility function of a person is the entirety of what that person wants, desires, and values. If I tried to write down my own utility function for you, it would be both utterly incomprehensible and probably ridiculously ugly. That’s assuming I’d even be capable of writing it all down—limited self-awareness, biases, continuous change, and all that.
To put it all in perspective, “maximizing one’s utility function” is very much equivalent to “according to the information you have, spend as much time as you think is worthwhile deciding on the probably-best course of action available, and then act on it, such that in hindsight you’ll have maximized your chances of reaching your own objectives”. This doesn’t mean obtaining perfect information, never being wrong, or worshipping a formula. It simply means living your own life, in your own way, with better (and improving) awareness of yourself, and updating (changing) your own beliefs when they’re no longer correct so that you can act and behave more rationally. Seen this way, LessWrong is essentially a large self-help group for normal people who just want to be better at knowing things and making decisions in general.
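As a toy illustration of that reading (pick the probably-best action available given what you currently believe), here is a minimal expected-utility sketch; the actions, probabilities, and payoffs are all hypothetical:

```python
# Expected-utility choice over a handful of actions. Each action maps to a
# list of (probability, utility) outcomes; every number here is invented.
actions = {
    "stay_home":   [(1.0, 2.0)],
    "go_for_hike": [(0.7, 8.0), (0.3, -1.0)],  # 30% chance of rain
    "host_picnic": [(0.5, 6.0), (0.5, 1.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# "Maximizing your utility function" here is just: take the best option
# available given what you currently believe about the outcomes.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)  # with these made-up numbers: go_for_hike
```

Nothing about this requires perfect information; when the probabilities change (new weather report), the beliefs update and the chosen action can change with them.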
On a last note, FacingTheSingularity does not contain a bunch of scientific essays that would be the final answer to all singularity concerns. At best, it could be considered one multi-chapter essay working through various points to support its primary thesis: that its author believes the various experts are right about the Singularity being “imminent” (within this century at the outside). This is clearly stated on the front page, which is also the table of contents. As I said in my previous reply, it’s a good popularized introduction. However, the real meat comes from the SingInst articles, essays, and theses, as well as some of the more official material on LessWrong. Eliezer’s Timeless Decision Theory paper is a good example of more rigorous and technical writing, though it’s by far not the most relevant, nor do I think it’s the first one a newcomer should read. If you’re interested in possible AI decision-making techniques, though, it’s a very interesting and pertinent read.
*(I was slightly irritated that I failed to fully communicate my point, and at the dismissal of long-thought-and-debated theories, including beliefs I’ve revalidated time and time again over the years, along with the childish comment about ex-Christians and their “machine god”. This does not mean, however, that I direct this irritation at you or some other, unrelated outlet. My irritation is my own and a product of my own mental models.)
Edit: Fixed some of the text and added missing footnote.