How many more microbes have been killed by the power wielders?
benelliott
When things get too complicated, it sometimes makes sense to stop and wonder: Have I asked the right question?
Enrico Bombieri
And there are times when you don’t get to choose whether or not you wrestle the gorilla.
Nobody here is claiming that people naturally reason in a Bayesian way.
We are claiming that they should.
I didn’t interpret the quote as implying that it would actually work, but rather as implying that (the Author thinks) Hanson’s ‘people don’t actually care’ arguments are often quite superficial.
I don’t know if this is typical, but a professional trader recently stated in an email to me that he knew very little about Bitcoin and basically had no idea what to think of it. This may hint that the lack of interest isn’t based on certainty that Bitcoin will flop, but simply on not knowing how to treat it, and on sticking to markets where they do have reasonably well-understood ways of making a profit, since exposure to risk is a limited resource.
The problem here is that you’ve not specified the options in enough detail. For instance, you appear to prefer going to Ecuador with preparation time to going without preparation time, but you haven’t stated this anywhere. You haven’t given the slightest hint whether you prefer Iceland with preparation time to Ecuador without. VNM is not magic: if you put garbage in, you get garbage out.
So to really describe the problem we need six options:
A1 - trip to Ecuador, no advance preparation
A2 - trip to Ecuador, advance preparation
B1 - laptop
B2 - laptop, but you waste time and money preparing for a non-existent trip
C1 - trip to Iceland, no advance preparation
C2 - trip to Iceland, advance preparation
Presumably you have preferences A2 > A1, B1 > B2, C2 > C1. You have also stated A > B > C, but it’s not clear how to interpret this; A2 > B1 > C2 seems the most charitable reading. You also seem to think C2 > B2, but you haven’t said so, so maybe I’m wrong.
You have four possible choices: D1 = (A1 or B1), D2 = (A2 or B2), E1 = (A1 or C1) and E2 = (A2 or C2).
The VNM axioms can tell us that E2 > E1, which also seems intuitively right. If we also accept C2 > B2 then they can tell you that E2 > D2. They don’t tell us anything about how to judge between D2 and E1, since the decision here depends on the size rather than the ordering of your preferences. None of this seems remotely counter-intuitive.
In short, ‘value of information’ isn’t some extra factor that needs to be taken into account on top of decision theory. It can be factored in within decision theory by correctly specifying your possible options.
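To make this concrete, here is a sketch of the whole comparison as a plain expected-utility calculation. The utility numbers and the win probability are made up purely for illustration; any numbers consistent with the preferences stated above (A2 > A1, B1 > B2, C2 > C1, A2 > B1 > C2, and C2 > B2) would do:

```python
# Hypothetical utilities, invented only to satisfy the stated preference
# ordering: A2 > A1, B1 > B2, C2 > C1, A2 > B1 > C2, C2 > B2.
u = {"A1": 8.0, "A2": 10.0,   # Ecuador without / with preparation
     "B1": 5.0, "B2": 4.0,    # laptop / laptop plus wasted preparation
     "C1": 3.0, "C2": 4.5}    # Iceland without / with preparation

p = 0.5  # assumed probability that the trip outcome happens

def expected_utility(win: str, lose: str) -> float:
    """Expected utility of a lottery paying `win` with probability p, else `lose`."""
    return p * u[win] + (1 - p) * u[lose]

D1 = expected_utility("A1", "B1")  # Ecuador-or-laptop, no preparation
D2 = expected_utility("A2", "B2")  # Ecuador-or-laptop, preparation
E1 = expected_utility("A1", "C1")  # Ecuador-or-Iceland, no preparation
E2 = expected_utility("A2", "C2")  # Ecuador-or-Iceland, preparation

print(D1, D2, E1, E2)
```

With these particular numbers, E2 comes out ahead of both E1 and D2, exactly as the axioms require; D2 versus E1 could go either way depending on the magnitudes chosen.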
Furthermore, information isn’t binary; it doesn’t suddenly appear once you have certainty and not before. If you take into account the existence of probabilistic partial information, you should find the exact same results pop out.
The same goes for the Allais paradox: having certitude of receiving a significant amount of money ($24 000) has a value, which is present in choice 1A, but not in all others (1B, 2A, 2B).
Why does it have value? The period where you have certainty in 1A but not in the other 3 probably only lasts a few seconds, and there aren’t any other decisions you have to make during it.
The biased random generator is also just as likely to output 0000000000 as it is 0010111101.
This is the mistake.
If you actually do the maths the biased generator is significantly more likely to output 0000000000 than 0010111101.
For a much simpler example, suppose we run each generator two times. The unbiased random generator outputs 00 25% of the time, 01 25% of the time, 10 25% of the time and 11 25% of the time.
For the biased generator, we need calculus. First suppose that its p(0) = x. Then p(00 | p(0) = x) = x^2. Since we have what is essentially a uniform distribution over [0,1] (the presence or absence of a single point makes no difference), we need to integrate f(x) = x^2 over the interval [0,1], which gives an answer of p(00) = 1⁄3. The same method gives p(11) = 1⁄3 and p(01) = p(10) = 1⁄6.
The general rule is that if we run it n times, then for any k between 0 and n the chance of it outputting k 1s is 1/(n+1), and that probability is shared out evenly among all the possible different ways of outputting k 1s (also derivable from calculus). Thus p(0000000000) = 1⁄11 ≈ 9.1%, while p(0010111101) = 1⁄2310 ≈ 0.043%.
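The rule above can be checked mechanically. A small sketch (the function name is mine; the formula is just the rule stated above, 1/((n+1) · C(n, k))):

```python
from math import comb

def biased_string_prob(bits: str) -> float:
    """Probability that the biased generator (p(0) drawn uniformly from
    [0,1], then n independent bits) emits exactly this bit string.
    Equals 1 / ((n + 1) * C(n, k)), where k is the number of 1s."""
    n, k = len(bits), bits.count("1")
    return 1.0 / ((n + 1) * comb(n, k))

print(biased_string_prob("00"))          # 1/3, matching the integral above
print(biased_string_prob("01"))          # 1/6
print(biased_string_prob("0000000000"))  # 1/11, about 9.1%
print(biased_string_prob("0010111101"))  # 1/2310, about 0.043%
```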
Nitpick: the detector lies on double-six regardless of the outcome, so the likelihood ratio is 35:1, not 36:1.
Let’s do a check. Assume a worst-case scenario where nobody publishes false results at all.
To get three p < 0.05 studies when the hypothesis is false requires on average 60 experiments. This is a lot, but it is within the realms of possibility if the issue is one which many people are interested in, so there are still grounds for scepticism about this result.
To get one p < 0.001 study if the hypothesis is false requires on average 1000 experiments. This is pretty implausible, so I would be much happier to treat this result as an indisputable fact, even in a field with many vested interests (assuming everything else about the experiment is sound).
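The arithmetic behind those two figures, plus a quick Monte Carlo cross-check (the function name and sample counts are mine, chosen only for illustration):

```python
import random

ALPHA = 0.05

# Under a false hypothesis, each experiment comes out "significant" with
# probability alpha, so one false positive costs 1/alpha experiments on
# average, and k of them cost k/alpha.
print(3 / ALPHA)    # 60 experiments for three p < 0.05 studies
print(1 / 0.001)    # 1000 experiments for one p < 0.001 study

def experiments_until_k_hits(k: int, alpha: float) -> int:
    """Simulate experiments on a false hypothesis until k false positives."""
    n = hits = 0
    while hits < k:
        n += 1
        if random.random() < alpha:
            hits += 1
    return n

random.seed(0)
avg = sum(experiments_until_k_hits(3, ALPHA) for _ in range(20000)) / 20000
print(avg)  # should land close to 60
```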
Of course, imbuing your clothing with intelligence so it will absorb killing curses has some truly horrifying moral implications.
ER personnel know this so well that they go to great lengths to ensure it doesn’t happen to them
Minor quibble: just saying something like “Promise me if you find me like this that you’ll kill me” does not constitute ‘great lengths’. Depending on tone and context it may not even be meant seriously, and even if it is, it’s still not a lot of effort to put toward the goal of not ending up in intensive care.
Vote this up if you would like me to post the results in the Discussion section
Out of interest, has any research been done into ways to avoid the whole ‘power corrupts’ effect other than ‘shun power’?
In ZF set theory, consider the following three statements.
I) The axiom of choice is false
II) The axiom of choice is true and the continuum hypothesis is false
III) The axiom of choice is true and the continuum hypothesis is true
None of these is provably true or false, so they all get assigned probability 0.5 under your scheme. This is a blatant absurdity, as they are mutually exclusive, so their probabilities cannot possibly sum to more than 1.
Admitting error clears the score and proves you wiser than before.
--Arthur Guiterman
Maybe it’s the same reason that broomsticks use Aristotelian physics. If magic was intelligently designed by people who didn’t know much science, you would expect it to obey the law of “it makes sense so long as you don’t think too hard”.
Roughly speaking, the problem is that mathematicians cannot come up with a meaningful definition of volume that applies to all sets of points (when I say cannot, I mean it is literally impossible, not just that they tried really hard and then gave up). Instead, we have a definition that applies to a very large collection of sets of points, but not to all of them.
Sets from that collection have a well defined volume, and any transformation which always leaves this unchanged is called volume preserving.
Sets from outside that collection, which is where the sets in the Banach–Tarski paradox live, don’t have a defined volume at all, and thus can interact with volume-preserving transformations in all sorts of weird ways.
It seems like the term ‘Pascal’s Mugging’ is having its meaning degraded.
I believe the original article that introduced the idea was careful to make sure that a simple expected utility calculation would show that accepting the offer was rational. To do this it deliberately exploited explosive functions like Knuth’s up-arrow notation to make sure the utilities grew faster than the probabilities shrank. This is what made it scary: by our current understanding, an ideal rational agent would hand over the money, which calls into question what we mean by ‘ideal rational agent’.
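For reference, here is a minimal sketch of how up-arrow notation explodes (only the tiniest cases are computable at all, which is rather the point):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b: n = 1 is ordinary exponentiation, and
    each extra arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 2))  # 3↑↑2 = 3^3 = 27
print(up_arrow(3, 2, 3))  # 3↑↑3 = 3^27 = 7625597484987
# 3↑↑↑3 = 3↑↑7625597484987 already dwarfs any physical quantity, which
# is why such utilities can outrun any ordinary probability penalty.
```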
The example given is NOT a Pascal’s mugging. One life, even if it is my own, has nowhere near enough utility to overcome the astonishingly tiny probability of the message being correct. Even if I cared about nothing else, there are more effective uses of those few seconds in terms of increasing my life expectancy. The people who comment are being irrational (assuming they are taking it seriously and not just playing along for fun).
All the other examples given in the thread are the same (with the possible exception of religion). Please can we make an effort to keep the original meaning? It is more interesting as a concept for refining our understanding of how rational agents would work.
--Paul Graham