Nice job writing the survey—fun times. I kind of want to hand it out to my non-LW friends, but I don’t want to corrupt the data.
aspera
Nice job on the survey. I loved the cooperate/defect problem, with calibration questions.
I defected, since a quick expected value calculation makes it the overwhelmingly obvious choice (assuming no communication between players, which I am explicitly violating right now). Judging from the comments, it looks like my calibration lower bound is going to be way off.
On Writing Well, by William Zinsser
Every word should do useful work. Avoid cliché. Edit extensively. Don’t worry about people liking it. There is more to write about than you think.
The plots were done in Mathematica 9, and then I added the annotations in PowerPoint, including the dashed lines. I had to combine two color functions for the density plot, since I wanted to highlight the fact that the line s=n represented indifference. Here’s the code:
r = 1; ua = 1; ub = -1;
f1[n_, s_] := (n s - s^2 r) (ua - ub);
Show[
 DensityPlot[-f1[n, s], {n, 0, 20}, {s, 0, 20},
  ColorFunction -> "CherryTones", Frame -> False, PlotRange -> {-1000, 0}],
 DensityPlot[f1[n, s], {n, 0, 20}, {s, 0, 20},
  ColorFunction -> "BeachColors", Frame -> False, PlotRange -> {-1000, 0}]]
I jest, but the sense of the question is serious. I really do want to teach the people I’m close to how to get started on rationality, and I recognize that I’m not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc?
My parents stopped me from skipping a grade, and apart from a few math tricks, we didn’t work on additional material at home. I fell into a trap of “minimum effort for maximum grade,” and got really good at guessing the teacher’s password. The story didn’t change until graduate school, when I was unable to meet the minimum requirements without working, and that eventually led me to seek out fun challenges on my own.
I now have a young son of my own, and will not make the same mistake. I’m going to make sure he expects to fail sometimes, and that I praise his efforts to go beyond what’s required. No idea if it will work.
I think it would be possible to have an anti-Occam prior if the total complexity of the universe is bounded.
Suppose we list integers according to an unknown rule, and we favor rules with high complexity. Given the problem statement, we should take an anti-Occam prior to determine the rule given the list of integers. It doesn’t diverge because the list has finite length, so the complexity is bounded.
Scaling up, the universe presumably has a finite number of possible configurations given any prior information. If we additionally had information that led us to take an anti-Occam prior, it would not diverge.
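To make the idea concrete, here is a toy sketch (the candidate rules and their "complexity" scores are invented for illustration, not taken from any real complexity measure). Because only finitely many rules are consistent with a finite list, a prior that weights rules *proportionally* to their complexity still normalizes:

```python
# Toy anti-Occam prior over rules generating a finite integer list.
# Rule names and complexity scores below are made up for illustration.

candidates = {                     # rule -> (consistent with the list?, complexity)
    "2n":          (True, 2),
    "2n mod 1000": (True, 6),
    "primes":      (False, 3),
}

# Keep only rules consistent with the observed list.
consistent = {rule: c for rule, (ok, c) in candidates.items() if ok}

# Anti-Occam: weight proportionally to complexity. The normalizer is finite
# because the complexity of rules fitting a finite list is bounded.
Z = sum(consistent.values())
anti_occam = {rule: c / Z for rule, c in consistent.items()}

print(anti_occam)  # the more complex consistent rule gets the higher weight
```

The point of the sketch is only that normalization goes through: with bounded total complexity, "favor complexity" defines a proper prior rather than a divergent one.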
You can’t remember whether or not bleggs exist in real life.
My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.
Why do you think most would cooperate? I would expect this demographic to do a consequentialist calculation, and find that an isolated cooperation has almost no effect on expected value, whereas an isolated defection almost quadruples expected value.
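The calculation I have in mind looks roughly like this (the payoff structure and all the numbers below are hypothetical stand-ins, not the survey's actual rules): one respondent is drawn at random for a prize that scales with the cooperation rate, and a defector's prize is multiplied by a bonus factor.

```python
# Hypothetical payoffs (NOT the actual survey's numbers), to show why an
# isolated defection dominates an isolated cooperation in expected value.

N = 1000        # assumed number of respondents
PRIZE = 60.0    # assumed base prize, dollars
BONUS = 4.0     # assumed defector multiplier

def expected_value(defect, frac_cooperate):
    """Expected winnings for one respondent under the assumed rules."""
    pot = PRIZE * frac_cooperate              # prize scales with cooperation rate
    personal = pot * (BONUS if defect else 1.0)
    return personal / N                       # 1/N chance of being drawn

# One vote barely moves the cooperation rate (1 part in N), but switching
# to defect multiplies your own conditional payout by the bonus factor.
ev_coop = expected_value(False, 0.8)
ev_defect = expected_value(True, 0.8 - 1 / N)
print(ev_coop, ev_defect)  # defection ~quadruples EV under these assumptions
```

Under any payoff structure with this shape, the marginal harm of your own defection to the pot is O(1/N), while the personal bonus is O(1), which is the consequentialist argument I expected most people here to run.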
Here is some clarification from Zinsser himself (ibid.):
“Who am I writing for? It’s a fundamental question, and it has a fundamental answer: You’re writing for yourself. Don’t try to visualize the great mass audience. There is no such audience—every reader is a different person.
This may seem to be a paradox. Earlier I warned that the reader is… impatient… . Now I’m saying you must write for yourself and not be gnawed by worry over whether the reader is tagging along. I’m talking about two different issues. One is craft, the other is attitude. The first is a question of mastering a precise skill. The second is a question of how you use the skill to express your personality.
In terms of craft, there’s no excuse for losing readers through sloppy workmanship. … But on the larger issue of whether the reader likes you, or likes what you are saying or how you are saying it, or agrees with it, or feels an affinity for your sense of humor or your vision of life, don’t give him a moment’s worry. You are who you are, he is who he is, and either you’ll get along or you won’t.
N.B.: These paragraphs are not contiguous in the original text.
I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can’t predict what’s meant to happen. In that case, he’s equally good at predicting any outcome, so it’s a perfectly uninformative hypothesis.
Occam’s Razor is non-Bayesian? Correct me if I’m wrong, but I thought it falls naturally out of Bayesian model comparison, from the normalization factors, or “Occam factors.” As I remember, the argument is something like: given two models with independent parameters {A} and {A,B}, P(AB model) ∝ P(A and B are correct) while P(A model) ∝ P(A is correct). Then P(AB model) ≤ P(A model).
Even if the argument is wrong, I think the result ends up being that more plausible models tend to have fewer independent parameters.
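A standard toy illustration of the Occam factor (my own example, not from this thread): compare a zero-parameter model of a coin ("it's fair") against a one-parameter model ("bias p, uniform prior") on balanced data. Integrating out p automatically penalizes the extra parameter:

```python
from math import comb

# Toy Bayesian model comparison showing the "Occam factor".
# Model A:  coin is fair (no free parameters).
# Model AB: coin bias p is a free parameter with a uniform prior on [0, 1].

n, k = 10, 5  # observed: 5 heads in 10 flips

# Marginal likelihood under the fixed model.
evidence_fixed = comb(n, k) * 0.5**n

# Marginal likelihood under the free model:
# ∫₀¹ C(n,k) p^k (1-p)^(n-k) dp = 1 / (n + 1), by the Beta integral.
evidence_free = 1.0 / (n + 1)

print(evidence_fixed)  # ≈ 0.246
print(evidence_free)   # ≈ 0.091: the simpler model wins on balanced data
```

The extra parameter spreads the free model's prior mass over outcomes the data rule out, so its marginal likelihood is lower even though it contains the fair coin as a special case, which is the "fewer independent parameters" effect.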
I added a section called “Deciding how to decide” that (hopefully) deals with this issue appropriately. I also amended the conclusion, and added you to the acknowledgements.
Ok, I think I’ve got it. I’m not familiar with VNM utility, so I’ll make sure to educate myself.
I’m going to edit the post to reflect this issue, but it may take me some time. It is clear (now that you point it out) that we can think of the ill-posedness coming from our insistence that the solution conform to aggregative utilitarianism, and it may be possible to sidestep the paradox if we choose another paradigm of decision theory. Still, I think it’s worth working as an example, because, as you say, AU is a good general standard, and many readers will be familiar with it. At the minimum, this would be an interesting finite AU decision problem.
Thanks for all the time you’ve put into this.
I would like to include this issue in the post, but I want to make sure I understand it first. Tell me if this is right:
It is possible mathematically to represent a countably infinite number of immortal people, as well as the process of moving them between spheres. Further, we should not expect a priori that a problem involving such infinities would have a solution equivalent to those solutions reached by taking infinite limits of an analogous finite problem. Some confusion arises when we introduce the concept of “utility” to determine which of the two choices is better, since utility only serves as a basis on which to make decisions for finite problems.
If that’s what you’re saying, I have a couple of questions.
Do you view the paradox as therefore unresolvable as stated, or would you claim that a different resolution is correct?
If I carefully restricted my claim about ill-posedness to the question of which choice is better from a utilitarian sense, would you agree with it?
Fixed. Thanks for reading so closely. It’s amazing how many little mistakes can survive after 10 read-throughs.
Great problem, thanks for mentioning it!
I think the questions “how many balls did you put in the vase as T → ∞?” and “how many balls have been destroyed as T → ∞?” both have well-defined answers. It’s just a fallacy to assume that the total number of balls in the vase as T → ∞ is equal to the difference between these quantities in their limits.
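A quick simulation of the vase puzzle makes the fallacy vivid (my own sketch, using the usual "ten in, one out per step" setup): the counts of added and removed balls are identical under any removal policy, yet the limiting contents depend entirely on *which* ball is removed each step.

```python
# Ross-Littlewood vase sketch: at step t, balls 10t-9 .. 10t go in and one
# ball comes out. "Total added minus total removed" is the same for every
# removal policy, but the limiting contents are not.

def vase_after(steps, pick):
    """Simulate `steps` steps; `pick` chooses which ball to remove."""
    vase = set()
    for t in range(1, steps + 1):
        vase.update(range(10 * t - 9, 10 * t + 1))  # add ten balls
        vase.discard(pick(vase))                    # remove one
    return vase

# Policy 1: remove the lowest-numbered ball. Ball t leaves at step t, so
# every ball eventually leaves: the vase is empty in the limit.
low = vase_after(1000, min)

# Policy 2: remove the highest-numbered ball. Ball 1 never leaves, so the
# vase contains infinitely many balls in the limit.
high = vase_after(1000, max)

print(len(low), len(high))  # both 9000 at any finite step...
print(1 in low, 1 in high)  # ...but the membership differs: False vs True
```

At every finite step both policies hold exactly 9000 balls, which is why taking the difference of the diverging limits tells you nothing about the limiting set itself.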
In the above example, the number of people and the number of days they live were uncountable, if I’m not mistaken. The take-home message is that you do not get an answer if you just evaluate the problem for sets like that, but you might if you take a limit.
Conclusions that involve infinity don’t map uniquely on to finite solutions because they don’t supply enough information. Above, “infinite immortal people” refers to a concept that encapsulates three different answers. We had to invent a new parameter, alpha, which was not supplied in the original problem, to come up with a well defined result. In essence, we didn’t actually answer the question. We made up our own problem that was similar to the original one.
The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don’t use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That’s perfectly OK, but then you have to choose where the jumps are.
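For concreteness, here is what dropping the continuity axiom looks like (the threshold location and jump size are arbitrary choices I made up, which is exactly the point):

```python
# A utility function with a deliberate discontinuity at an assumed pain
# threshold. The threshold (7.0) and jump size (100) are arbitrary choices;
# nothing in the setup tells you where to put them.

THRESHOLD = 7.0  # assumed point on a 0-10 discomfort scale

def utility(pain):
    """Continuous below the threshold, then a sharp finite jump."""
    return -pain if pain < THRESHOLD else -pain - 100.0

print(utility(6.999))  # ≈ -6.999
print(utility(7.0))    # -107.0: an infinitesimal pain change, a finite jump
```

With continuity, an infinitesimal change in pain can only produce an infinitesimal change in utility; without it, you are free to place jumps anywhere, but then the threshold locations become extra unjustified inputs to the problem.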
My mother’s husband professes to believe that our actions have no influence over the way in which we die, but that “if you’re meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!” for example.
After I explained how I would expect that belief to constrain experience (for example, how it would affect plane crash statistics), and pointed out that he demonstrates his unbelief every time he goes to see a doctor, he told me that you “just can’t apply numbers to this,” and, “Well, you shouldn’t tempt fate.”
My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?