Oh. I totally misunderstood the first time I read that comment, sorry!
If anyone else had trouble: Kindly is pointing out that a problem very similar to the Mugging exists even if you don’t have an actual mugger, i.e., someone who’s going around telling people to give them $5. For example, the existence of monotheistic religion is weak Bayesian evidence for an omnipotent god who will torture people for actual eternity if they steal from the cookie jar; does it follow that an FAI should destroy all cookie jars? (Well, it doesn’t, because there’s no reason to think that this particular hypothesis is what would dominate the FAI’s actions, but something would.)
Which is actually a very good point, because the original statement of the problem triggered our anti-cheating adaptations (must avoid being exploitable by a clever fellow tribe member that way), but the problem doesn’t seem to go away if you remove that aspect. It doesn’t even seem to need a sentient matrix lord, really. -- Or perhaps some will feel that it does make the problem go away, since they are fine with ridiculous hypotheses dominating their actions as long as those hypotheses have large enough utility differences… in which case I think they should bite the bullet on the mugging, since the causal reason they don’t is obviously(?) that evolution built them with anti-cheating adaptations, which doesn’t seem like a good reason in the grand scheme of things.
Or perhaps some will feel that it does make the problem go away, since they are fine with ridiculous hypotheses dominating their actions as long as these hypotheses have large enough utility differences… in which case I think they should bite the bullet on the mugging
Even this doesn’t make the problem go away: given the Solomonoff prior, the expected utility under most sensible unbounded utility functions fails to converge. (A nonnegligible fraction of my LW comments direct people to http://arxiv.org/abs/0712.4318)
That’s an important problem, but to me it doesn’t feel like the same problem as the mugging—would you agree, or if not, could you explain how they’re somehow the same problem in disguise?
I guess this depends on how willing you are to bite the bullet on the mugging. I’m rather uncertain, as I don’t trust my intuition to deal with large numbers properly, but I also don’t trust math that behaves the way de Blanc describes.
If you actually accept that small probabilities of huge utilities are important and you try to consider an actual decision, you run into the informal version of this right away: when the mugger asks you for $5 in exchange for 3^^^3 utilons, you have to consider the probability that you can persuade the mugger to give you even more utility, and the probability that there is another mugger just around the corner who will offer you 4^^^4 utilons if you give them your last $5 instead. This explosion of possibilities is basically the same thing described in the paper.
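To make that explosion concrete, here is a toy expected-value comparison (every probability and payoff below is invented for illustration; 10^100 and 10^200 stand in for "3^^^3-ish" payoffs we can actually compute with): the first mugger's offer looks dominant on its own, but a rival hypothesis with a slightly larger payoff and an even smaller probability flips the decision, and nothing stops this from iterating.

```python
# Toy sketch of rival tiny-probability hypotheses fighting over a
# decision.  All numbers are made up; the structure is the point.
COST = 5  # the $5 the mugger asks for

def ev_give_to_mugger(p_mugger=10.0**-50, payoff=10**100):
    # Expected utility of paying this mugger.
    return p_mugger * payoff - COST

def ev_save_for_rival(p_rival=10.0**-60, rival_payoff=10**200):
    # Expected utility of keeping the $5 for a hypothetical rival
    # mugger around the corner offering an even bigger payoff.
    return p_rival * rival_payoff - COST

# The first mugger's offer beats refusing outright...
assert ev_give_to_mugger() > 0
# ...but the rival hypothesis, despite a far smaller probability,
# dominates it; a third hypothesis would dominate the rival, etc.
assert ev_save_for_rival() > ev_give_to_mugger()
```

Since the payoffs can always be escalated faster than the probabilities shrink, there is no stable "best" action, which is the informal face of the non-convergence result.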
Has this de Blanc proof been properly discussed in a main post yet? (For instance, has anyone managed to get across the gist of de Blanc’s argument in a way suitable for non-mathematicians? I saw there is a paragraph on the LessWrong wiki, but no reference to main articles on this subject.)
Also, how does Eliezer feel about this topic since from the Sequences he clearly believes he has an unbounded utility function and it is not up for grabs?
Has this de Blanc proof been properly discussed in a main post yet?
Not that I can find. I did write a comment that is suitable for at least some non-mathematicians, which I could expand into a post and make clearer/more introductory. However, some people didn’t take it very well, so I am very reluctant to do so. If you want to read my explanation, even with that caveat, you can click upward from the linked comment.
Also, how does Eliezer feel about this topic since from the Sequences he clearly believes he has an unbounded utility function and it is not up for grabs?
I’m not sure. In the Sequences, he thinks there are unsolved problems relating to unbounded utility functions and he states that he feels confused about such things, such as in The Lifespan Dilemma. I don’t know how his thoughts have changed since then.