Sorry, I was unclear. I meant that the causal structure where the equations of physics cause the outcome of the experiment falls out of the Pearlean causal math, not that the outcome of the experiment falls out of the physical math (though the latter is of course also true).
I guess this depends on how willing you are to bite the bullet on the mugging. I’m rather uncertain, as I don’t trust my intuition to deal with large numbers properly, but I also don’t trust math that behaves the way de Blanc describes.
If you actually accept that small probabilities of huge utilities are important and you try to consider an actual decision, you run into the informal version of this right away; when the mugger asks you for $5 in exchange for 3^^^3 utilons, you consider the probability that you can persuade the mugger to give you even more utility, and the probability that there is another mugger just around the corner who will offer you 4^^^4 utilons if you offer them your last $5 rather than giving it to this mugger. This explosion of possibilities is basically the same thing described in the paper.
Has this de Blanc proof been properly discussed in a main post yet?
Not that I can find. I did write a comment that is suitable for at least some non-mathematicians, which I could expand into a post and make clearer/more introductory. However, some people didn’t take it very well, so I am very reluctant to do so. If you want to read my explanation, even with that caveat, you can click upward from the linked comment.
Also, how does Eliezer feel about this topic? From the Sequences, he clearly believes he has an unbounded utility function and that it is not up for grabs.
I’m not sure. In the Sequences, he thinks there are unsolved problems relating to unbounded utility functions and he states that he feels confused about such things, such as in The Lifespan Dilemma. I don’t know how his thoughts have changed since then.
That’s a physcial truth, and a local one at that.
An event can have more than one cause. My uncertainty about the value of some variable in an equation is related to my uncertainty about the outcome of an experiment in exactly the way that makes Pearlean methods tell me that both the value of t in the equation and the physical truth that g ≈ 10 m/s^2 are causes of the amount of time that the object takes to fall. This is just a fact about my state of uncertainty that falls directly out of the math.
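To make this concrete, here is a toy sketch (my own illustration, with a made-up drop height and a made-up spread on g, not anything from Pearl): my uncertainty about g flows through the physics into my uncertainty about the fall time, and that dependence is what makes g show up as a parent of the fall-time node in the induced causal structure.

```python
import math
import random

# Toy illustration: propagate uncertainty about g into uncertainty about
# the time an object takes to fall from a fixed height.
h = 20.0  # drop height in metres (assumed for the example)

times = []
for _ in range(10_000):
    g = random.gauss(9.8, 0.5)   # my uncertainty about g, in m/s^2
    t = math.sqrt(2 * h / g)     # from h = (1/2) * g * t^2
    times.append(t)

print(sum(times) / len(times))   # about 2 s, with spread inherited from g
```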
Or perhaps some will feel that it does make the problem go away, since they are fine with ridiculous hypotheses dominating their actions as long as these hypotheses have large enough utility differences… in which case I think they should bite the bullet on the mugging.
Even this doesn’t make the problem go away as, given the Solomonoff prior, the expected utility under most sensible unbounded utility functions fails to converge. (A nonnegligible fraction of my LW comments direct people to http://arxiv.org/abs/0712.4318)
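Roughly, and this is my own sketch of the flavour of the result rather than the precise statement in the paper: the expected utility under the Solomonoff prior $m$ is
$$\mathbb{E}[U] = \sum_x m(x)\,U(x).$$
If $U$ is computable and unbounded, then for each $n$ the outcome “the first $x$ with $U(x) \ge 2^n$” can be described by a program of length about $K(n) + O(1) \le \log_2 n + O(\log \log n)$, so the corresponding term of the sum is at least roughly $2^{\,n - \log_2 n - O(\log \log n)}$. The terms don’t go to zero, so the series can’t converge.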
Then it might be that the standard natural numbers are the unique minimal element in this inclusion relationship.
Why would we care about the smallest model? Then, we’d end up doing weird things like rejecting the axiom of choice in order to end up with fewer sets. Set theorists often actually do the opposite.
Well, yeah, we can taboo ‘truth’. You are still using the titular “useful idea” though by quantifying over propositions and making this correspondence. The idea that there are these things that are propositions and that they can appear both in quotation marks and also appear unquoted, directly in our map, is a useful piece of understanding to have.
What some here might call The Superintelligent Will, but which I see as a logical outgrowth of der Wille zur Macht, is to stamp your values on an uncaring universe.
Is this why people like Nietzsche, or do most people who like Nietzsche have different reasons?
The reason I think this is that I suspect most of those who are firmly against it don’t know or understand the arguments for it or they are using “moral realism” in a way that is different from how philosophers use it.
This is pretty likely. I spent about a minute trying to determine what the words were actually supposed to mean, then decided that it was pointless, gave up, and refrained from voting on that question. (I did this for a few questions, though I did vote on some, then gave up on the poll.)
I can’t even formulate a “rationalist” argument against that wisdom, besides some vague guesses that principles of social organization and grand-scale value conflict like farmers vs. foragers—what LW likes to dismiss as “politics”—might stay important after we handle FAI, death or scarcity.
Trying to rationalize something like this is much worse in the long run. Intentionally acting irrationally is much better than acting the same way, but believing it to be rational.
This advice is more useful on the meta-level of having a community norm of accepting people who do this. (Do we already have such a norm? I know that I act as if such a norm exists, but I am unsure if others do.) The good thing about Less Wrongers is that you can get them to adopt such a norm just by explaining why it would be useful.
John Taylor Gatto won the New York State Teacher of the Year award in 1991 (per New York State’s education website). His ambition to be a great teacher led him to the realization that the system itself is broken, and he was so disgusted with it that he resigned.
This pattern matches to the standard failure mode where exceptional individuals assume that others are more like them and therefore more competent than they actually are. This causes them to conclude that institutions are more flawed than they actually are.
I agree. I was shocked that a prominent anthropologist would be so essentialist about the concept of ‘species’.
What I don’t want is a result that says “Enter into a contract with 99 other people that none of you donates $100 to puppies unless all of you do, then donate $100, so you can feel like you caused puppies to get $10000!”. Somehow that seems counterproductive. Except as a one-off game for funsies, which doesn’t count. :)
Actually, this isn’t wrong, as long as you think about it the right way. First, you are causing puppies to get $10000, but you are also causing 99 people to lose $100, so you have to account for that.
More importantly, though, frame the scenario this way: 99 people have already signed the contract, and you still have to decide whether to sign it. Then, you are clearly making the entire difference and you should be willing to accept correspondingly large disutilities if they are necessary to get the deal to happen (unless they could easily find someone else, in which case both the math and the intuition agree that you are not creating as many utilons). Note that the math cannot require everyone to accept individually large disutilities, because then signing the contract would cause all of those disutilities to occur.
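To put toy numbers on “making the entire difference” (the numbers are mine and purely illustrative):

```python
# Purely illustrative utilon values for the contract scenario above.
value_of_10000_to_puppies = 50.0   # if the full $10,000 gets donated
cost_of_losing_100 = 0.3           # per person who gives up $100

# As the decisive 100th signer, your choice flips the whole outcome:
decisive_impact = value_of_10000_to_puppies - 100 * cost_of_losing_100
print(decisive_impact)  # 20.0: you get credit for the entire difference

# If they could easily find a replacement signer, your signing mostly just
# shifts one $100 share from the replacement to you, so your counterfactual
# impact is about zero:
replaceable_impact = -cost_of_losing_100 + cost_of_losing_100
print(replaceable_impact)  # 0.0
```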
If, however, they have not signed anything yet, then either you know that they are going to sign and that they cannot find anyone else to be the 100th person, in which case this is equivalent to the other scenario, or you don’t know whether they are all going to sign it, in which case the expected utility is reduced by the uncertainty and you should no longer accept disutilities as large in order to sign the contract.
Okay, so “how should I feel” means “what is the utility of this scenario” in this context? In that case, you should use the full values, rather than discounting for things that were partially ‘someone else’s responsibility’. If you prefer world-state A to world-state B, you should act so as to cause A to obtain rather than B. The fact that A gets a tag saying “this wasn’t all me—someone helped with this” doesn’t make the difference in utilities any smaller.
Indeed, and moreover they cancel each other out.
They don’t exactly cancel out. I think that brains tend to use “these things cancel out” as an excuse to do less thinking.
Viscerally, how should I feel about donating $100 to puppies?
Why do you expect there to be standard math for this? This seems like it’s up to your utility function and the psychology of motivation.
Anyway, Stuart Armstrong summarized a lot of what’s known about bargaining problems. I think the Nash bargaining solution is better than the other ideas he describes, for reasons that are complicated to explain.
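For reference, the two-player Nash bargaining solution (this is the textbook definition, nothing specific to Stuart’s post): given a feasible set $S$ of utility pairs and a disagreement point $d = (d_1, d_2)$, it selects the point of $S$ with $u_i \ge d_i$ that maximizes the product of the gains,
$$(u_1 - d_1)(u_2 - d_2).$$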
As far as I know, there’s no general solution for more than two players yet. We do know that any Pareto optimum must correspond to maximizing a weighted sum of the agents’ utility functions, but that isn’t much help; the whole point of bargaining is to choose which Pareto optimum will be selected, and knowing that there is some weighting that would give the right answer doesn’t tell us which one it is. If you look at the proof of the fact that any Pareto optimum must correspond to a weighted sum of the utility functions, you can see that the solution is, in some sense, more fundamental than the weights, and that trying to reduce the problem of picking a solution to one of picking weights is not a promising angle of attack here.
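The fact I’m alluding to is roughly the following (a sketch, assuming the set of achievable utility vectors is convex, e.g. because the agents can randomize): if $u^*$ is Pareto optimal, then by the supporting hyperplane theorem there are weights $w_i \ge 0$, not all zero, with
$$\sum_i w_i u_i^* \ge \sum_i w_i u_i \quad \text{for every feasible } u,$$
so $u^*$ maximizes that weighted sum. But the weights are read off from the point you already chose, which is why the characterization doesn’t by itself tell you which point to pick.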
The usual way that I dispel the illusion of a ‘reference class’ based on something like sentience (as opposed to something sensible like the class of beings making the same observations as you) is by asking what inferences a nonsentient AI should make, but of course that line of argument won’t convince Mitchell.
I’m not quite sure what you’re saying, but I think I agree with this. I guess I’m trying to flesh out the idea of “some process we don’t understand”, since Robin’s model seems to depend on it (as do things like Moore’s law, which is more strongly supported by the data).
If we do assume universality, counterfactual resiliency is still a useful method of analysis, and we can even further clarify the reasons why by pointing out that models of universal behaviour usually involve the aggregation of many small, mostly independent effects. However, some apparent evidence against counterfactual resiliency is weakened. For example, we could take the counterfactual that the Greeks had an industrial revolution, but we might not actually know how plausible that is. Models like Robin’s that conjecture universality would predict that there were reasons why that couldn’t have happened, so we need to be more familiar with the data before a claimed instance of nonresiliency can really be considered good evidence against the model. Thus, the idea of universality allows us to more accurately evaluate the strength of arguments of this sort.
I don’t want to train readers to unhide things by default just because they might miss intelligent conversation in subthreads.
Another way of doing this would be a five-second delay before unhiding hidden comments. Waiting isn’t fun, and it prevents hyperbolic discounting from magnifying the positive reinforcement of reading something that someone doesn’t want you to read.
I’m not sure what you mean. Pearlean causality, as I understand it, is about maps. You put in a subjective probability distribution and a few assumptions and a causal structure comes out.