Nice job writing the survey—fun times. I kind of want to hand it out to my non-LW friends, but I don’t want to corrupt the data.
aspera
On the importance of taking limits: Infinite Spheres of Utility
Nice job on the survey. I loved the cooperate/defect problem, with calibration questions.
I defected, since a quick expected value calculation makes it the overwhelmingly obvious choice (assuming no communication between players, which I am explicitly violating right now). Judging from the comments, it looks like my calibration lower bound is going to be way off.
On Writing Well, by William Zinsser
Every word should do useful work. Avoid cliché. Edit extensively. Don’t worry about people liking it. There is more to write about than you think.
The plots were done in Mathematica 9, and then I added the annotations in PowerPoint, including the dashed lines. I had to combine two color functions for the density plot, since I wanted to highlight the fact that the line s=n represented indifference. Here’s the code:
r = 1; ua = 1; ub = -1;
f1[n_, s_] := (n s - s^2 r) (ua - ub);
Show[DensityPlot[-f1[n, s], {n, 0, 20}, {s, 0, 20}, ColorFunction -> "CherryTones", Frame -> False, PlotRange -> {-1000, 0}], DensityPlot[f1[n, s], {n, 0, 20}, {s, 0, 20}, ColorFunction -> "BeachColors", Frame -> False, PlotRange -> {-1000, 0}]]
I jest, but the sense of the question is serious. I really do want to teach the people I’m close to how to get started on rationality, and I recognize that I’m not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc.?
My parents stopped me from skipping a grade, and apart from a few math tricks, we didn’t work on additional material at home. I fell into a trap of “minimum effort for maximum grade,” and got really good at guessing the teacher’s password. The story didn’t change until graduate school, when I was unable to meet the minimum requirements without working, and that eventually led me to seek out fun challenges on my own.
I now have a young son of my own, and will not make the same mistake. I’m going to make sure he expects to fail sometimes, and that I praise his efforts to go beyond what’s required. No idea if it will work.
I think it would be possible to have an anti-Occam prior if the total complexity of the universe is bounded.
Suppose we list integers according to an unknown rule, and we favor rules with high complexity. Given that setup, we should take an anti-Occam prior when inferring the rule from the list of integers. It doesn’t diverge because the list has finite length, so the complexity of candidate rules is bounded.
Scaling up, the universe presumably has a finite number of possible configurations given any prior information. If we additionally had information that led us to take an anti-Occam prior, it would not diverge.
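The non-divergence claim can be sketched concretely. Below is a minimal Python illustration with made-up hypothesis names and complexity scores (all hypothetical, chosen only to show the mechanics): as long as the hypothesis set is finite, weights proportional to complexity normalize into a proper prior that favors the more complex rules.

```python
from fractions import Fraction

def anti_occam_prior(complexities):
    """Assign each hypothesis a prior weight proportional to its
    complexity score. With finitely many hypotheses, the total weight
    is finite, so the prior normalizes (it cannot diverge)."""
    total = sum(complexities.values())
    return {h: Fraction(c, total) for h, c in complexities.items()}

# Hypothetical rules for a short integer list, with illustrative
# complexity scores (e.g. program length in some fixed language).
complexities = {"constant": 1, "arithmetic": 3, "quadratic": 5, "lookup-table": 8}
prior = anti_occam_prior(complexities)

assert sum(prior.values()) == 1                    # a proper, non-divergent prior
assert prior["lookup-table"] > prior["constant"]   # anti-Occam: complexity favored
```

The same normalization would fail for an unbounded hypothesis space, which is exactly why the finite-complexity assumption does the work in the argument above.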
You can’t remember whether or not bleggs exist in real life.
My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.
Why do you think most would cooperate? I would expect this demographic to do a consequentialist calculation, and find that an isolated cooperation has almost no effect on expected value, whereas an isolated defection almost quadruples expected value.
Here is some clarification from Zinsser himself (ibid.):
“Who am I writing for? It’s a fundamental question, and it has a fundamental answer: You’re writing for yourself. Don’t try to visualize the great mass audience. There is no such audience—every reader is a different person.
This may seem to be a paradox. Earlier I warned that the reader is… impatient… . Now I’m saying you must write for yourself and not be gnawed by worry over whether the reader is tagging along. I’m talking about two different issues. One is craft, the other is attitude. The first is a question of mastering a precise skill. The second is a question of how you use the skill to express your personality.
In terms of craft, there’s no excuse for losing readers through sloppy workmanship. … But on the larger issue of whether the reader likes you, or likes what you are saying or how you are saying it, or agrees with it, or feels an affinity for your sense of humor or your vision of life, don’t give him a moment’s worry. You are who you are, he is who he is, and either you’ll get along or you won’t.
N.B.: These paragraphs are not contiguous in the original text.
I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can’t predict what’s meant to happen. In that case, he’s equally good at predicting any outcome, so it’s a perfectly uninformative hypothesis.
Occam’s Razor is non-Bayesian? Correct me if I’m wrong, but I thought it falls naturally out of Bayesian model comparison, from the normalization factors, or “Occam factors.” As I remember, the argument is something like: given two models with independent parameters {A} and {A,B}, P(AB model) ∝ P(A and B are correct) and P(A model) ∝ P(A is correct). Then P(AB model) ≤ P(A model).
Even if the argument is wrong, I think the result ends up being that more plausible models tend to have fewer independent parameters.
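For reference, the usual “Occam factor” sketch from Bayesian model comparison goes something like this (notation mine, following the standard Laplace-approximation treatment):

```latex
% Prior over models: adding an independent parameter can only lose mass,
%   P(M_{AB}) \propto P(A \wedge B) \le P(A) \propto P(M_A).
%
% Evidence for a model M with parameters \theta:
%   P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta
%   \approx P(D \mid \hat{\theta}, M)\,
%           \underbrace{\frac{\sigma_{\theta \mid D}}{\sigma_{\theta}}}_{\text{Occam factor}\ \le\ 1},
%
% where \hat{\theta} is the best-fit parameter value, \sigma_{\theta} is the
% prior width, and \sigma_{\theta \mid D} is the posterior width. Each extra
% independent parameter multiplies in another factor \le 1, so models with
% more parameters are automatically penalized unless they fit much better.
```

This matches the conclusion above: more plausible models tend to have fewer independent parameters, without invoking any extra non-Bayesian principle.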
Meetup : Champaign, IL meetup
Meetup : Meetup, Champaign IL
Meetup : Weekly meetup, Champaign IL: Cafe Paradiso
The Sleeping Beauty problem and transformation invariances
I added a section called “Deciding how to decide” that (hopefully) deals with this issue appropriately. I also amended the conclusion, and added you to the acknowledgements.
Ok, I think I’ve got it. I’m not familiar with VNM utility, and I’ll make sure to educate myself.
I’m going to edit the post to reflect this issue, but it may take me some time. It is clear (now that you point it out) that we can think of the ill-posedness coming from our insistence that the solution conform to aggregative utilitarianism, and it may be possible to sidestep the paradox if we choose another paradigm of decision theory. Still, I think it’s worth working as an example, because, as you say, AU is a good general standard, and many readers will be familiar with it. At the minimum, this would be an interesting finite AU decision problem.
Thanks for all the time you’ve put into this.
My mother’s husband professes to believe that our actions have no influence on the way in which we die: “if you’re meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!” for example.
After I explained how I would expect that belief to constrain experience (for instance, how it would affect plane crash statistics), and pointed out that he demonstrates his own unbelief every time he goes to see a doctor, he told me that you “just can’t apply numbers to this,” and “Well, you shouldn’t tempt fate.”
My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?