I really fail to see why you’re all so fascinated by Newcomb-like problems. When you break causality, all logic based on causality stops functioning. If you try to model it mathematically, you will always get an inconsistent model.
You cannot do that without breaking Rice’s theorem. If you assume you can find out the answer from someone else’s source code, you get an instant contradiction.
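A minimal sketch of the diagonal argument this gestures at (my own illustration, not from the original comment): no predictor can be correct on every program, because we can always build a “contrarian” program that runs the predictor on its own source and does the opposite.

```python
# Diagonalization sketch: a predictor that claims to determine any
# program's output from its code is defeated by a program that asks
# the predictor about itself and then does the opposite.

def make_contrarian(predict):
    """Build a program that runs `predict` on itself and inverts the answer."""
    def contrarian():
        guessed = predict(contrarian)  # ask the predictor about ourselves...
        return not guessed             # ...and do the opposite
    return contrarian

# Any candidate predictor is wrong on its own contrarian, by construction:
always_one_box = lambda prog: True   # a (hypothetical) predictor that always guesses True
c = make_contrarian(always_one_box)
assert c() != always_one_box(c)      # the prediction necessarily fails
```

This is the standard halting-problem-style construction; the predictor’s claimed power is self-undermining the moment the predicted agent can consult the predictor.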
You cannot work around Rice’s theorem or around causality by specifying 50.5% accuracy independently of the modeled system: any accuracy higher than 50% + epsilon is equivalent to arbitrarily good accuracy via repeated prediction (a standard cryptographic amplification result), and bare 50% + epsilon doesn’t cause the paradox.
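Here is a small sketch of that amplification claim (my own illustration, using the same majority-vote argument as BPP amplification in complexity theory): if each query to a predictor is independently correct with probability p = 0.5 + epsilon, the majority vote over n queries is correct with probability approaching 1.

```python
# Majority-vote amplification: a 50.5% predictor queried n times,
# taking the majority answer, approaches perfect accuracy as n grows.
import math

def majority_correct(p: float, n: int) -> float:
    """Probability that the majority of n independent queries,
    each correct with probability p, is correct (n odd)."""
    log_p, log_q = math.log(p), math.log(1.0 - p)
    total = 0.0
    for k in range(n // 2 + 1, n + 1):
        # Compute binomial terms in log space to avoid underflow.
        log_term = math.log(math.comb(n, k)) + k * log_p + (n - k) * log_q
        total += math.exp(log_term)
    return total

for n in (1, 1001, 10001):
    print(n, round(majority_correct(0.505, n), 4))
```

With p = 0.505 the advantage per query is tiny, but the majority over 10,001 queries is already right well over 80% of the time, and the error keeps shrinking exponentially in n; that is the sense in which 50.5% against arbitrary algorithms is as strong as near-certainty.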
Give me one serious mathematical model of a Newcomb-like problem where the paradox emerges while preserving causality. Here are some examples. When you model it, you get either a trivial one-box solution, a causality break, or omega losing.
1. You decide first what you would do in every situation, omega decides second, and then you implement your initial decision table and are not allowed to switch. Game theory says you should implement one-boxing.
2. You decide first what you would do in every situation, omega decides second, and you are allowed to switch. Game theory says you should precommit to one-boxing, then implement two-boxing, and omega loses.
3. You decide first what you would do in every situation, omega decides second, and you are allowed to switch. If omega always decides correctly, then he bases his decision on your switch, which either turns this into model #1 (you cannot really switch; precommitment is binding) or breaks causality.
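A payoff sketch of model #1 above (my own illustration): with the standard Newcomb amounts ($1,000 in box A; $1,000,000 in box B iff omega predicted one-boxing) and a binding decision table, `accuracy` stands in for omega’s chance of correctly predicting the committed strategy. Any accuracy above the break-even point, just over 50%, makes one-boxing the better commitment; at exactly 50% it is not, which is why 50% + epsilon alone doesn’t cause the paradox.

```python
# Expected payoffs for a binding precommitment against a predictive omega.
def expected_payoff(one_box: bool, accuracy: float) -> float:
    box_b = 1_000_000  # full iff omega predicted one-boxing
    box_a = 1_000      # always available to a two-boxer
    if one_box:
        # Omega right -> box B is full; omega wrong -> nothing.
        return accuracy * box_b
    else:
        # Always get box A; box B is full only if omega wrongly predicted one-boxing.
        return box_a + (1 - accuracy) * box_b

# With a binding table and 50.5% accuracy, one-boxing wins:
assert expected_payoff(True, 0.505) > expected_payoff(False, 0.505)
# At a coin-flip predictor, two-boxing dominates as usual:
assert expected_payoff(True, 0.5) < expected_payoff(False, 0.5)
```

The break-even accuracy here is 0.5005 (solve `a * 1e6 = 1000 + (1 - a) * 1e6`), so under this binding-precommitment model the “trivial solution to one-box” falls straight out of the expected values.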
Death Note is a brilliant anime, but not that great an example of rationality. TV Tropes calls it a Xanatos Roulette.
First you start with a smart plan. That can be rational. Then you complicate the plan. It makes the characters look even smarter, and it’s still quite rational. At some point the plan is so overcomplicated, with so many uncertainties just assumed away, that it’s no longer rationality but plain omniscience, characters “knowing the script of future episodes”. That’s what Death Note is. Light and L overplot, and it’s really fun to watch, and they look really “smart” when it’s well done, but it’s way past any reasonable pretense of rationality.
TV Tropes has more examples, like the Saw series. They’re all great fun, and not very rational.
And the argument that omega only needs predictive power of 50.5% to cause the paradox works only if that power holds against ANY arbitrary algorithm. Having that power against arbitrary algorithms breaks Rice’s theorem; having it (or even 100% accuracy) against just a limited subset of algorithms doesn’t cause the paradox.
If you take the strict decision-tree precommitment interpretation, then you fix the causal order: you decide first, omega decides second, game theory says one-box, problem solved.
Decision-tree precommitment is never a problem in game theory, as precommitting to the entire tree commutes with decisions by other agents:
A decides what f(X), f(Y) to do if B does X or Y. B does X. A does f(X).
B does X. A decides what f(X), f(Y) to do if B does X or Y. A does f(X).
These are identical, as B cannot base his decision on f. So the changing-your-mind problem never occurs.
With omega:
A decides what f(X), f(Y) to do if B does X or Y. B does X. A does f(X) - here B’s answer can depend on f.
B does X. A decides what f(X), f(Y) to do if B does X or Y. A does f(X) - somehow not allowed any more.
I don’t think the paradox exists in any plausible mathematization of the problem. It looks to me like another of those philosophical problems that exist because of the sloppiness of natural language and very little more; I’m just surprised that the OB/LW crowd cares about this one and not about others. OK, I admit I really enjoyed it the first time I saw it, but just as something fun, nothing more than that.
I find this plausible but not too reliable. A non-Bayesian way to put it would be “positive point estimate, not statistically significant”; I’m not sure what the nice Bayesian way of saying that is.
I agree with what you’re proposing, I also enjoyed comments which go meta about it. I hope this becomes a common practice on OB/LW at least. Not just for the sake of arguments which we agree on, but also to make occasional genuine total disagreement stand out more strongly against the usual nitpicking background.
I blame social status. Well, I blame social status and other primate tribal psychology for most biases people have. You’re basically accepting Eliezer as your personal guru and tribal leader, and following him mindlessly, especially when others seem to be doing so too. This worked great when you were trying to get your group into power in a tribe; it’s a pretty stupid thing to do these days.
It doesn’t matter that Paul Graham and Stallman don’t allow comments. People know them, they have very high reputations and plenty of fanboys, and all that makes them high social status individuals. Mindlessly following the leader is not the same as mindlessly following the group; both are real and distinct behaviours.
People feel differently reading something by Paul Graham and something by a blogger they’ve never heard of. You might have gotten so used to social status indicators that you don’t consciously see them. Go to 4chan (not /b/) and see what discussion is like without them. It is actually surprisingly good.
There are some good reasons for being terrified. We are tribal animals. We don’t really care about the truth as such, but we care a lot about tribal politics. We can pursue truth when we have a very high degree of disinterest in what the truth will be, but that’s a really exceptional situation. When we care about the shape of the truth, we lose a lot of rationality points, and the forces of tribal politics make us care a lot about things other than truth. It might be our strongest instinct, even stronger than individual survival or sex drive.
I agree with you that it has its downsides, but I really don’t see how you can accept the politics and stay rational. I cannot think of many examples of that.
I’m also really disappointed by so many status indicators all over Less Wrong: top contributors on every page, your social status points (karma) on every page, user names and points on absolutely everything, vote up / vote down. You might think we’re doing fine, but so was reddit when it was tiny; let’s see how it scales up. I think we should get rid of as much of that as we can. reddit’s quality of discussion is a lot lower than 4chan’s, even though it’s much smaller.
And this is a great example of what you once posted about—different people are annoyed by different biases. You seem to think social status and politics are mostly harmless and may even be useful, I think it’s the worst poison for clear rational thinking, and I haven’t seen many convincing examples of it being useful.
The word “cult” seems to be used in a very vague sense by everyone, and people have different definitions. Here’s something I wrote about Paul Graham’s and a few other “cults”. It’s only vaguely relevant, as I used the label “cult” differently.
If you are not into Paul Graham’s cult / meme complex, and you hear people who really are talking about how working 100 hours a week on a built-to-sell startup is the best way to prove your worth as a hacker and a human being, they really do sound like “cult” members.
Is there even a useful distinction between beginner and advanced material? Perhaps if something required a heavy math background. But under that interpretation, examples like Newcomb-like problems are actually anti-advanced, because they collapse into triviality or nonsense when you try to apply any mathematics to them.
And we can crosslink as much as Eliezer did on OB. I like this about his style.
How likely is it to be a result of genuine reasoning leading to this conclusion, and how likely is it to be just a rationalization of the yuck factor? It seems pretty straightforward.
Does “Vote down” on LW mean “not interesting enough to go to the front page”? Because that’s how I feel about this. On the other hand on Reddit “Vote down” tends to mean “Doesn’t agree with the groupthink”, so I’m very reluctant to use it.
Maybe it shouldn’t, but on reddit, and before that on Slashdot, and everywhere else I’ve seen, that’s how it ended up being used. Up = agree, down = disagree. Now I want my time back, so down.
The hope is that we’ll be able to avoid this. For myself, I’m in the habit of upvoting well-argued comments that I nevertheless disagree with.
I would like to believe that’s what I’m doing, but I think I’m fooling myself. It’s enough if our thresholds for upvoting and downvoting differ between comments we agree with and comments we disagree with, something like:
Somewhat annoying comment you agree with = ignore
Somewhat annoying comment you disagree with = downvote
Somewhat smart comment you agree with = upvote
Somewhat smart comment you disagree with = ignore
As most comments fall into this not-completely-brilliant, not-complete-rubbish category, this is quite close to upvote on agree, downvote on disagree.
I don’t agree with anything about your post, from assumptions to conclusions.
I’d say it’s highly irrational to give money to any charitable cause. As far as I can tell, most charities have laudable goals but don’t even keep track records of meeting them. The best they can tell you is that they spent some high percentage of the money they took in on efforts vaguely related to the goal, not on the most cost-effective means of meeting it. And that’s assuming we even know which goals to donate to, which isn’t really true.
Well, I know for sure that an extremely effective way of helping poor people of the world (one of the most popular targets one way or another) is selfishly trading with them. That’s what I do, I buy cheap sweatshop-produced stuff. And it probably helps them more than I would by sending them money.
If there was suddenly an extra pound in the world and I had to decide best use for it—I would use it on myself. Seriously. And so by marginal reasoning I don’t donate a single pound to any charity. I don’t need it to feel good, and that’s really what charities are about. Not about any causes.
Also, I don’t know any Catholic who gives 10% of their income to charities, including to the Catholic Church. Where did you come up with an absurdly exaggerated figure like that?
Considering two completely arbitrary numbers (45%, 10%), taking the percentage of Protestants relative to believers rather than population, a very vague concept of “Marxism” (all social democratic parties are somewhat Marx-influenced, and you only picked those), your grouping together of East and West Germany, even though in this context they’re very much not one unit, and the fact that most variables in Europe tend to be somewhat continuous geographically… You have so many free variables to manipulate here that you could make it align perfectly with the popularity of Pokemon vs. Dora the Explorer, or pop vs. techno.
I never give to any charity; all the information I have makes me strongly believe that increasing per capita income is the best long-term solution to most of the problems. I believe there are a few exceptions (the polio eradication effort comes to mind) that can give important and lasting results for little money, but I have seen very little research supporting any of them.
So GiveWell gets my vote, even if not my money: no rationalist should give money to any charity that doesn’t publish its effectiveness data.
I voted it up, as it was amusing even long after it became obvious, which was around the “number of shrines per household” point.
A software request. Can we get #-links to footnotes? I wanted to tweet footnote 2, but it doesn’t have any anchor. Or put it into a separate post, as it’s awesome.