True, but the people studying whether compound XYZ affects tumour growth are not preselected to believe that it does.
The ability to get a bad result because of a sufficiently wrong prior is not a flaw in Bayesian statistics; it is a flaw in our ability to perform Bayesian statistics. Humans tend to be overconfident when assigning probabilities with very low or very high values. As such, the proper way to formulate a prior is to imagine hypothetical results that would bring the probability into a manageable range, ask yourself what you would want your posterior to be in each such case, and build your prior from that. These hypothetical results must be constructed and analyzed before the actual result is obtained, to eliminate bias. As Tyrrell said, the ability of a wrong prior to produce a bad conclusion is a strength, because other Bayesians will be able to see where you went wrong by disputing the prior.
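For concreteness, here is a minimal sketch of that back-construction in Python; the 20:1 likelihood ratio and the 0.5 target posterior are made-up numbers for illustration, not anything from the discussion above.

```python
# A minimal sketch of back-constructing a prior from a desired posterior.
# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio,
# so fixing a desired posterior for a hypothetical result pins down the prior.

def implied_prior(desired_posterior, likelihood_ratio):
    """The prior P(H) that yields the desired posterior P(H|E) for
    evidence E with likelihood ratio P(E|H)/P(E|not-H)."""
    posterior_odds = desired_posterior / (1 - desired_posterior)
    prior_odds = posterior_odds / likelihood_ratio
    return prior_odds / (1 + prior_odds)

# Hypothetical: if a result with a 20:1 likelihood ratio in favor of H
# ought to leave me at P(H) = 0.5, my prior must be about 0.048.
print(implied_prior(0.5, 20.0))  # ~0.0476
```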
“I’ll bite the bullet and say global warming is the perfect example here. It’s pretty clear to me that many people hold their positions on this issue—pro and contra—for political/social reasons rather than evidential ones.”
I used to think that global warming was a poor example of this: while the right wing has plenty of reasons to oppose action against global warming, and thus irrational reasons to force themselves to believe that global warming does not exist, the left wing has no reason to support such action aside from evidence that global warming is a threat. Then it occurred to me that many people on the left do have ulterior motives for pushing anti-global-warming action: other people on the left support it too (see Eliezer’s The Sky is Green/Blue parable, and this article too, I suppose). This is even more irrational, but given the stunning level of irrationality among humans across the political spectrum, it is probably a factor for some.
This interpretation is interesting and does seem to have merit. I suppose that from an evolutionary perspective, it is inevitable that any being advanced enough to have a concept of self would also identify its successor mind moment with that self.
This interpretation also complicates ethics. Traditionally, the way one treats others is expected to be at least quasi-utilitarian in nature, while providing for one’s own future prosperity is generally considered a question of wisdom rather than morals. However, given that a person and his future mind moment are merely similar entities connected by a near-continuous transformation, rather than the same entity, it would seem that there must be a symmetry between the moral implications of a person’s treatment of others and of his treatment of his own future.
Edit: Also, quantum suicide only works as intended if the mind is terminated within one mind-moment-span of the time in which triggering of the quantum suicide device becomes inevitable. Otherwise, it denies a successor to at least one mind moment.
“P(W=X and T=Y) = P(W=X) P(T=Y|W=X); P(W=X and T=Y) = exp(-len(W)) P(T=Y|W=X)” therefore P(W=X) = exp(-len(W)). I’m trying to find a way to get this to sum to 1 across all W, but failing. Is there something wrong with this prior probability, or am I doing my math wrong?
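For what it’s worth, here is a quick numeric check in Python, under the assumption (not stated in the post) that W ranges over binary strings:

```python
import math

# Quick numeric check, assuming W ranges over all binary strings
# (the post doesn't pin down the alphabet, so this is an assumption).
# There are 2**n strings of length n, each with weight exp(-n),
# so the total is the geometric series: sum over n of (2/e)**n.
total = sum((2 ** n) * math.exp(-n) for n in range(200))
print(total)                  # ~3.784
print(1 / (1 - 2 / math.e))   # closed form of the series; matches
# With an alphabet of k >= 3 instructions, the analogous sum diverges,
# since k/e > 1.
```

Under that assumption the series converges, but to roughly 3.78 rather than 1, so the weights would at least need a normalizing constant; with an instruction alphabet of three or more symbols, the sum diverges outright.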
“For example, a world-program that contains 10^10 random instructions is much less likely than one that contains 10^10 copies of the same instruction.” Is that really necessary if a world-program with 1 copy of an instruction is functionally indistinguishable from a world-program with 10^10 copies of that single instruction?
“Since we can transfer complexity back and forth between W and T, we can’t justify applying Occam’s Razor to one but not the other, so it makes sense to apply it to T. This also means that we should treat T as compressible; it is more likely that the universe is 3^^^3 steps old than that it is 207798236098322674 steps old.” I don’t think Occam’s Razor works that way.
The Tegmark Level IV multiverse idea is quite elegant, and this was a fairly well-thought-out post, although with some problems, most of which Mallah already identified. One more thing though:
It raises other strange anthropic questions too. The one that comes most immediately to my mind is this: If every possible mathematical structure is real in the same way that this universe is, then isn’t there only an infinitesimal probability that this universe will turn out to be ruled entirely by simple regularities? Given a universe governed by a small set of uniformly applied laws, there will be an infinity of universes governed by the same laws plus arbitrary variations, possibly affecting the internally observable structure only at very specific points in space and time. This results in a sort of anti–Occam’s Razor (Macco’s Rozar? Occam’s Beard Tonic?), where the larger the irregularity, the more likely it becomes over the space of all possible universes, because there are that many more ways for it to happen. (For example, there is a universe — actually, a huge number (possibly infinity) of barely different universes — identical to this one, except that, for no reason explainable by the usual laws of quantum mechanics, but not ruled out as a logically possible law unto itself, your head will explode as soon as you finish reading this post. I hope that possibility does not dissuade you from doing so, but I accept no responsibility if this does turn out to be one of those universes.)
This has to decrease the probability that this theory, as stated, is correct, since we appear to live in a fairly regular universe. However, it could be that simpler structures do have more measure, even in a Level IV multiverse.
Also, c
If they can get out at all, then I would assume that they are a threat and can force you to buy them a beer if they want one.
An atheist walked into a bar, but seeing no bartender he revised his initial assumption and decided he only walked into a room.
http://friendlyatheist.com/2008/02/29/complete-the-atheist-joke-1/
Many atheists were formerly theists.
Still, I suppose it might have been better as “A scientist walked into what he thought was a bar, but seeing no bartender, barstools, or drinks, he revised his initial assumption and decided he only walked into a room.”
In actual statistical practice, choosing good priors clearly requires skills and techniques that aren’t part of the naive Bayesian canon.
This is true. That is when it is time to use frequentist approximations.
Frequentist techniques can be useful in certain situations, but only because assigning accurate priors is so difficult, and because we often have such overwhelmingly large amounts of evidence that any hypothesis with a substantial prior will end up with a posterior near zero or near one under proper Bayesian reasoning. In those situations, judging a hypothesis by its p-values saves a lot of complicated work and is almost as good. However, when the evidence is not overwhelming, or when it points to a hypothesis with a negligible prior, frequentist statistics ceases to provide an adequate approximation. Always remember to think like a Bayesian, even when using frequentist methods.
Also, this was a terrible example of frequentist statistics avoiding the use of priors, because the prior assigned zero probability to the coin landing heads at any rate other than 90% or 10%. That is a tremendous assertion, and refusing to specify the relative probabilities of 90% heads vs. 10% heads just makes the prior incomplete. Saying that there is no prior is like saying that I did not make this post just because the last sentence lacks a period
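To make that concrete, here is a minimal sketch in Python of what the completed prior looks like; the 50/50 split and the flip counts are made-up numbers, not anything from the original example.

```python
# A minimal sketch of completing the prior: put explicit mass on the two
# admitted biases (0.9 and 0.1) and update on the observed flips.

def posterior_90(prior_90, heads, tails):
    """P(bias = 0.9 | data) under a prior confined to biases 0.9 and 0.1."""
    like_90 = 0.9 ** heads * 0.1 ** tails
    like_10 = 0.1 ** heads * 0.9 ** tails
    num = prior_90 * like_90
    return num / (num + (1 - prior_90) * like_10)

# With a 50/50 split between the two biases, 8 heads in 10 flips is decisive:
print(posterior_90(0.5, 8, 2))  # ~0.999998
```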
I guess it did. Sadly, I don’t remember what I intended to say.
Hi.
Why would Less Wrong have an abnormally high percentage of lurkers? Also, being a lurker is not black and white. For example, I mostly just lurk, but I post comments occasionally.
This definition of lurker has the advantage of being clear-cut enough that the numbers are meaningful, but it does not capture as important a group in online-community dynamics as defining a lurker as someone who reads but does not post, whether or not he has an account.
Also, with that definition, I have not been a lurker for quite a while, and yet I appear to be accumulating free karma points for saying “hi” anyway. Not complaining.
Unfortunately, pretending that a thought or action came from someone else will probably not let you analyze it from a neutral perspective, because it was still recorded from a biased perspective. It’s like the way people can be encouraged to cheer for the protagonist of a novel or movie without any evidence that the protagonist is doing something worth cheering for. Looking at thoughts from an outside view might make luminosity easier, but not easy.
No, problems 2 and 3 are symmetrical in a more than superficial way. In both cases, the proper course of action is to attempt to conduct an unbiased evaluation of the evidence and of the biases affecting each of you. The difference is, in problem 3, we have already encountered and evaluated numerous nearly identical situations, so it is easy to come to the proper decision, whereas in problem 2, the situation could be new and unique, and missing background information about the effects of bias on the two individuals and the accuracy of their predictions becomes important.
I don’t think an incentive system for game inactivity is a good idea. For most online games, more in-game activity means more in-game utility. A gaming clan is meant to help its members gain in-game utility, so incentives for inactivity would be counterproductive to those ends. Furthermore, forming an SGG or LWGG would probably draw many of us toward whatever game it is implemented in, while likely producing only a slight decrease in activity for those already there (maybe an increase for games where managing a clan takes a lot of work).
However, the idea of forming gaming clans as a way to get publicity for LW or SIAI could be a good one in games with a large social aspect. Still, I would not recommend incentives for inactivity, because increased interaction with other players would increase the effectiveness of a gaming publicity drive. Also, I think a Less Wrong clan would be much more effective than a Singularitarian clan: most people would probably write off “Singularitarian” as some nutcase cult belief and have no interest, but there are many people who want to become less wrong and could be helped a lot by this site, and many of them would learn about SIAI from LW after we hook them in.
Another possible way to use gaming productively would be to start up an LW- or SIAI-backed MMO and use it to generate revenue. It seems unlikely that this would be practical, but I’m not sure of that, so I figure throwing the idea out there can’t hurt.
Do we have somewhere to send people who want to ponder philosophy less analytically than we do? Do we need one? I think it would be more worthwhile to try to convince them to take a more analytical perspective.
Let there be N nodes. Let a be an agent from the set A of all agents. Let v_ai be the value agent a places on node i. Let w_ij be the weight between nodes i and j. Let the “averaged agent” b mean a constructed agent b (not in A) for which v_bi = the average over all a of v_ai. Write “the sum over all i and j of S” as sum_{i,j}(S).

Average IC = IC_a = -sum_{i,j} [w_ij × sum_a (v_ai × v_aj)] / |A|

Expected IC from average agent b = IC_b = -sum_{i,j} [w_ij × (sum_a(v_ai) / |A|) × (sum_a(v_aj) / |A|)]
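For concreteness, here is a minimal sketch of the two quantities in Python; the data are made up, and symmetric weights are an assumption on my part:

```python
import numpy as np

def ic(v, w):
    # IC of one agent with node values v: -sum_{i,j} w_ij * v_i * v_j
    return -v @ w @ v

# Made-up data: 3 agents, 4 nodes.
rng = np.random.default_rng(0)
V = rng.normal(size=(3, 4))  # V[a, i] = v_ai
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2            # assuming symmetric weights, w_ij = w_ji

avg_ic = np.mean([ic(v, W) for v in V])  # the "Average IC" above
ic_of_avg = ic(V.mean(axis=0), W)        # IC of the averaged agent b
print(avg_ic, ic_of_avg)                 # these generally differ
```

Running it shows that the two quantities generally differ, which is the point of the comparison.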
Am I the only one who’s completely lost by this?
I got a 30. It’s possible that I have mild Asperger’s, but as far as I can tell, I don’t match any common psychological pigeonhole.
Also, I skimmed the article, and I didn’t see the request that people who have not yet seen the article refrain from taking the poll until after I had already responded to it.
Parapsychologists make a poor control group for scientists because part of their job is collecting evidence that parapsychology works. In science, that step is already done: biologists do not need to prove that life works, because life exists, and physicists do not need to prove that physics works, because physics, by definition, IS the way the universe works. Einstein did not dream up relativity and then start looking for evidence to support it; he looked at the evidence that was available and came up with relativity as a way to explain it. Parapsychologists do it the other way around.