Yes. For myself, I already subscribed to that philosophy (though I am happy to see it written down more eloquently than I could have expressed it myself). The OB posts that come to mind from which I learned something I didn't previously know would be the excellent series on quantum mechanics. But that's not relevant to most people (honestly, quantum mechanics isn't of practical relevance to me either, though it is intellectually interesting). Tsuyoku Naritai is, in my opinion, the one thing from which most people would derive the most benefit.
rwallace
These are good suggestions, though if you are going to print “0 and 1 are Not Probabilities” (which makes a coherent argument even though I disagree with it), I would suggest also printing the post where you caution people against putting the label “probability estimate” on brown numbers.
I’ve had to consciously adjust my reactions on this sort of thing a few times, by reminding myself that the amount I should care about saving 1 euro on a product should not depend on the total price—but only and specifically on how frequently I will buy the product.
Put another way: it helps to have the right formula to replace the wrong one.
Compile a large enough database of historical events that nobody could memorize more than a fraction of it. For the test, choose a few events at random, describe the initial conditions and ask the candidate to predict the outcomes.
That still has the problem that it doesn’t test for lack of bias, but for having bias that matches that of the people who wrote the stories. I suggest instead using real cases—and not taken from the media, because that means selection bias, but taking all the cases from the files of a particular police department during a particular span of time.
Point. Still, we’ve been recording lots of different kinds of events for a long time. Off the top of my head, other kinds of historical data that could be useful here:
Medical cases, minor scientific controversies, engineering projects, battles, the stock market, markets in general, expeditions.
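The test procedure proposed above (draw a few cases at random from a large pool, present only the initial conditions, score the candidate's predictions against the known outcomes) could be sketched as follows. This is only an illustrative sketch; the field names and the toy data are assumptions, not part of the original proposal:

```python
import random

def sample_test(cases, n=5, seed=None):
    """Draw n cases at random from a pool too large to memorize."""
    rng = random.Random(seed)
    return rng.sample(cases, n)

def score(predictions, selected):
    """Fraction of sampled cases whose outcome was predicted correctly."""
    correct = sum(
        1 for pred, case in zip(predictions, selected)
        if pred == case["outcome"]
    )
    return correct / len(selected)

# Toy stand-in for a database of historical events: each record has the
# initial conditions shown to the candidate and the outcome they must predict.
cases = [
    {"conditions": "case %d setup" % i, "outcome": i % 2}
    for i in range(100)
]

selected = sample_test(cases, n=5, seed=42)
perfect = [c["outcome"] for c in selected]
print(score(perfect, selected))  # a candidate who predicts every outcome scores 1.0
```

Random sampling from a pool nobody could memorize is what makes this a test of prediction rather than recall.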
The instrumental value of science is that scientific progress or lack thereof is what will make the difference between a cosmos of life and mind, versus a cosmos of dead matter in which mind was a transient blip on a single dust speck. It’s not that it matters which scientist gets there first—it’s that it matters whether we get far enough within the time we have.
Well yes—that’s the point of fiction, it’s an ingredient of the miracle by which civilization is built from killer apes.
Presumably the minority of people who, for whatever reason, strongly feel this way (whether rightly or wrongly) are the most likely to self-identify as rationalists.
This is a case where a modern (or even science fictional) problem can be solved with a piece of technology that was known to the builders of the pyramids.
The technology in question is the promise. If the overall deal is worthwhile, then the solution is for me to agree to it upfront. After that, I don't have to do any more utility calculations; I simply follow through on my agreement.
Recall, however, that the objective is not to be someone who would do well in fictional game theory scenarios, but someone who does well in real life.
So one answer is that in real life, people don't suddenly emigrate to a distant galaxy after one transaction.
But the deeper answer is that what matters is not just the negative consequences of breaking one promise, but of being someone who has a policy of breaking promises whenever it superficially appears useful.
Dublin, Ireland.
+1 for “Rationalists win”. What is Parfit’s Hitchhiker? I couldn’t find an answer on Google.
Ah, thanks. I’m of the school of thought that says it is rational both to promise to pay the $100, and to have a policy of keeping promises.
My answer to that one is that I don’t play chicken in the first place unless the stake is something I’m prepared to die for.
My answer still applies—I’m not going to make a song and dance about who does it, unless the other guy has been systematically not pulling his weight and it’s got to the point where that matters more to me than this task getting done.
Simple models are fine as long as we don’t forget they are only approximations. Rationalists should win in the real world.
I think the best approach is a slightly more sophisticated one: commit to the belief that there is a way to succeed and you will find it—but not necessarily that you have already found it.
I would imagine it should be possible to freeze your brain and donate the rest of your organs?
Tsuyoku Naritai.