Hello everyone
I’ve been lurking here for a while now but I thought it was about time I said “Hi”.
I found Less Wrong through HPMOR, which I read even though I never read Rowling’s books.
I’m currently working my way through the Sequences at a few a day. I’m about 30% through the 2006-2010 collection, and I can heartily recommend reading them in chronological order on something like the Kindle app on your iPhone. ciphergoth’s version suited me quite well. I’ve been making notes as I go, so sooner or later a huge set of comments and queries will arrive all at once.
I have a long-standing love of expressing my beliefs in terms of probability, but reading through those first sequences has really sharpened my appreciation for the art.
I’ve been reading quite a lot of papers recently and had got to the point where I’d read enough to be really worried about p ≈ 0.05 - which I reasoned at the time meant there was a good chance something I’d read recently was wrong… and now I need to take into account that the p-value might be a complete mess in the first place. Does anyone have a figure for how many papers published at p ≈ 0.05 have a Bayesian probability of being true that is lower than that?
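A back-of-envelope way to see why p ≈ 0.05 can understate the problem: the fraction of "significant" findings that are false depends not just on the threshold but on how often tested hypotheses are true in the first place, and on study power. The numbers below (1 in 10 hypotheses true, 80% power) are purely illustrative assumptions of mine, not figures from anywhere in particular.

```python
def false_discovery_rate(prior, power, alpha):
    """Fraction of results significant at `alpha` that are actually
    false, given the prior probability a tested hypothesis is true
    and the study's statistical power."""
    false_positives = alpha * (1 - prior)   # nulls that slip through
    true_positives = power * prior          # real effects detected
    return false_positives / (false_positives + true_positives)

# Illustrative numbers: 1 in 10 tested hypotheses true,
# 80% power, alpha = 0.05:
fdr = false_discovery_rate(prior=0.1, power=0.8, alpha=0.05)
# → 0.36, i.e. roughly a third of the p < 0.05 findings are false
```

Under these (made-up) assumptions, "p < 0.05" corresponds to only about a 64% chance the finding is real, which is the kind of gap the question above is asking about.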
What else can I tell you? I was raised in the Church of England, but I imagine I was fortunate in that representatives of the church told me, while I was still young, that my questions couldn’t be answered. Compared with the rest of the world, that alone put the whole belief structure on pretty shaky ground.
I’m in the Cambridge area in the UK and have been lurking on the local mailing list for a while, but haven’t said hello there yet.
I’m in my late thirties now and soon expecting to become a father for the first time. There is a shocking lack of rationality in and around childbirth, and significant low-hanging fruit for anyone who approaches it rationally. I’ll post about this later. Have any other parents found easy gains by reading the science? I’d love to hear about it.
I’m a software engineer and, until recently, a project manager for bespoke software projects for small businesses. Right now I’m trying to get some iPhone apps off the ground to add to my passive income so that I can spend as much time with my new child as possible.
Topics of interest to me at the moment are:
The rationality and practicalities of changing to a passive income stream.
The practicalities of home schooling.
The practicalities of setting up some better memes for my child than the ones I finished my own childhood with.
Box B is already empty or already full [and will remain the same after I’ve picked it]
Do I have to believe that statement is completely and utterly true for this to be a meaningful exercise? It seems to me that I should treat it as dubious.
It seems to me that Omega is achieving a high rate of success by some unknown good method. If I believe Omega’s method is a hard-to-detect remote-controlled money vaporisation process then clearly I should one-box.
A super intelligence has many ways to get the results it wants.
I am inclined to think that I don’t know the mechanism with enough certainty to reason myself into two-boxing against the evidence to date.
Does it matter which undetectable unbelievable process Omega is using for me to pick my strategy? I don’t think it does—I have to acknowledge that I’m out of my depth with this alien and arguments against causality defiance or the impossibility of undetectable money vaporisers are not going to help me take the million.
Another tack: Omega isn’t a superintelligence; he’s got a ship, a plan, and a lot of time on his hands. He turns up on millions of worlds to play this game. His guesses are pretty lousy: he guesses right only x percent of the time. We are simply the only planet on which he’s consistently guessed right. We don’t know what x is across the full sample; looking only at his results here, it looks good. Does it really seem rational to second-guess the sample we see?
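To put rough numbers on this "lucky Omega" story (the figures here are my own illustrative assumptions, not part of the problem statement): even a guesser who is right only half the time, given enough worlds, should be expected to produce a few planets with flawless records.

```python
def expected_perfect_planets(n_planets, n_trials, hit_rate):
    """Expected number of planets on which a guesser with a fixed
    per-trial hit rate happens to get every prediction right."""
    return n_planets * hit_rate ** n_trials

# A coin-flipping Omega (hit rate 0.5) visiting ten million worlds
# and playing twenty rounds on each:
lucky_worlds = expected_perfect_planets(10_000_000, 20, 0.5)
# → about 9.5 planets would see a perfect record by luck alone
```

So if we could be the one lucky planet out of millions, the observed streak is weaker evidence than it looks; the question is how much probability mass to put on that story versus Omega simply being good at this.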
It seems to me that we have to accept some pretty wild statements and then start reasoning based on them for us to come to a losing strategy. If we doubt the premises to some degree then does it become clear that the most reasonable strategy is one-boxing?