Why tell readers that their other selves in other worlds are dying of cancer, so they should really think about cryonics, and then go on and make a post like this?
If I can’t even get a glimpse of these other worlds, and my decisions don’t alter them, why would that make utilitarianism seem more valid (it isn’t)?
Just to make sure I’m getting this right… this is sort of along the same lines of reasoning as quantum suicide?
It depends on the type of "failure": quenches are not uncommon. Timing matters too: the LHC is enormous, and this is the first time it has been operated. Expect malfunctions.
But if it had been tested for a few months beforehand, to make sure the mechanics were all engineered right, then it would only take a few (fewer than ten) instances of the LHC failing just before going to full power for me to seriously consider an anthropic explanation. If the machine is mechanically sound and still miraculously fails every time the dials are turned up high, that explanation becomes likely enough to consider.
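To make the "fewer than ten failures" intuition concrete, here is a toy Bayesian sketch. The prior, the per-attempt failure rate of a mechanically sound machine, and the assumption that the anthropic hypothesis predicts failure every time are all my own illustrative numbers, not anything from the comment above:

```python
# Toy Bayesian update: two hypotheses for repeated LHC failures.
# All numbers below are illustrative assumptions.

prior_anthropic = 1e-6   # assumed prior on the exotic (anthropic) hypothesis
p_fail_normal = 0.1      # assumed per-attempt failure rate of a sound machine
p_fail_anthropic = 1.0   # the anthropic hypothesis predicts failure every time

def posterior(n_failures):
    """Posterior probability of the anthropic hypothesis after
    observing n consecutive failures, via Bayes' rule over the
    two hypotheses."""
    like_a = prior_anthropic * p_fail_anthropic ** n_failures
    like_n = (1 - prior_anthropic) * p_fail_normal ** n_failures
    return like_a / (like_a + like_n)

for n in (3, 6, 10):
    print(n, round(posterior(n), 4))
```

With these made-up numbers the anthropic hypothesis overtakes the mundane one somewhere around six consecutive failures, and by ten it dominates, which is roughly the shape of the intuition in the comment: each additional "miraculous" failure of a sound machine multiplies the odds in its favor.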
“After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?”
I'm not sure what is meant by that.