the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
The approach I describe above merely consists of making this fact explicit.
This would be true were it not for your meta-rule. But the criteria for deciding whether something is a legitimate exception may be hazy and intuitive, and not amenable to being stated in a simple form. This doesn't mean the criteria are bad, though.
For example, I wouldn’t dream of formulating a rule about cookies that covered the case “you can eat them if they’re the best in the state”, but I also wouldn’t say that someone who is trying to avoid eating cookies therefore can’t eat the best-in-state cookies. It’s a judgement call. If you expect your judgement to be impaired enough that following rigid, explicitly stated rules will be better than making judgement calls, then OK, but it is far from obvious that this is true for most people.
The OP didn’t give any argument for SPECKS>TORTURE, they said it was “not the point of the post”. I agree my argument is phrased loosely, and that it’s reasonable to say that a speck isn’t a form of torture. So replace “torture” with “pain or annoyance of some kind”. It’s not the case that people will prefer arbitrary non-torture pain (e.g. getting in a car crash every day for 50 years) to a small amount of torture (e.g. 10 seconds), so the argument still holds.
Once you introduce any meaningful uncertainty into a non-Archimedean utility framework, it collapses into an Archimedean one. This is because even a very small difference in the probabilities of some highly positive or negative outcome outweighs a certainty of a lesser outcome that is not Archimedean-comparable. And if the probabilities are exactly equal, it is more worth your time to do more research so that they are no longer equal than to act on the basis of a hierarchically less important outcome.
For example, if we cared infinitely more about not dying in a car crash than about reaching our destination, we would never drive, because there is a small but positive probability of crashing (and the same goes for any degree of horribleness you want to add to the crash, up to and including torture—it seems reasonable to suppose that leaving your house at all very slightly increases your probability of being tortured for 50 years).
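To make the collapse concrete, here is a minimal sketch of the driving example; all utilities and probabilities are invented for illustration. A lexicographic (non-Archimedean) utility is modeled as a tuple whose first tier strictly dominates the second, with Python's built-in tuple comparison playing the role of the non-Archimedean order:

```python
# Tier 1 ("don't die") strictly dominates tier 2 ("reach the destination").
# Expectation is taken component-wise; tuples compare lexicographically.

def expected(lottery):
    """Component-wise expected utility of [(probability, (tier1, tier2)), ...]."""
    return (sum(p * u[0] for p, u in lottery),
            sum(p * u[1] for p, u in lottery))

# Staying home: certain survival, destination never reached.
stay = [(1.0, (0.0, 0.0))]
# Driving: a one-in-a-billion crash (tier-1 loss), otherwise arrival (tier-2 gain).
drive = [(1e-9, (-1.0, 0.0)), (1.0 - 1e-9, (0.0, 1.0))]

# Any nonzero tier-1 probability difference settles the choice,
# no matter how large the tier-2 gain is.
best = max([("stay", stay), ("drive", drive)], key=lambda kv: expected(kv[1]))
print(best[0])  # prints "stay": the lexicographic agent never drives
```

The point of the sketch is that the tier-2 payoff of driving never gets consulted: under lexicographic comparison the 1e-9 crash probability alone decides the outcome, which is exactly the paralysis described above.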
For the record, EY’s position (and mine) is that torture is obviously preferable. It’s true that there will be a boundary of uncertainty regardless of which answer you give, but the two types of boundaries differ radically in how plausible they are:
if SPECKS is preferable to TORTURE, then for some N and some level of torture X, you must prefer 10N people to be tortured at level X over N people being tortured at a slightly higher level X’. This is unreasonable: X’ is only slightly higher than X, while you are forcing ten times as many people to suffer the torture.
On the other hand, if TORTURE is preferable to SPECKS, then there must exist some number of specks N such that N-1 specks is preferable to torture, but torture is preferable to N+1 specks. But this is not very counterintuitive, since the fact that torture costs around N specks means that N-1 specks is not much better than torture, and torture is not much better than N+1 specks. So knowing exactly where the boundary is isn’t necessary to get approximately correct answers.
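A toy Archimedean model makes the second kind of boundary look as benign as claimed; the exchange rate N below is an arbitrary placeholder, not a claim about the true rate:

```python
# Toy model: one speck costs 1 unit of disutility, torture costs N units.
N = 1_000_000  # assumed speck-to-torture exchange rate, purely illustrative

def disutility(specks=0, tortures=0):
    return specks * 1 + tortures * N

# Near the boundary, N-1 specks, one torture, and N+1 specks rank in that
# order, but their values are almost identical:
near_boundary = [disutility(specks=N - 1),
                 disutility(tortures=1),
                 disutility(specks=N + 1)]

# The spread across the three options, relative to the stakes involved:
spread = (max(near_boundary) - min(near_boundary)) / N
print(spread)  # 2e-06
```

So even if your estimate of N is off by orders of magnitude, the options adjacent to the boundary differ by a vanishing fraction of the total disutility, which is the sense in which the exact boundary location doesn't matter for getting approximately correct answers.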
To repeat what was said on the CFAR mailing list here: this “bet” isn’t really a bet, since there is no upside for the other party; they are worse off than when they started in every possible scenario.
I don’t think that chapter is trying to be realistic (it paints a pretty optimistic picture).
Sure, in that case there is a 0% counterfactual chance of heads; your words aren’t going to flip the coin.
The question “how would the coin have landed if I had guessed tails?” seems to me like a reasonably well-defined physical question about how accurately you can flip a coin without having the result be affected by random noise such as someone saying “heads” or “tails” (as well as quantum fluctuations). It’s not clear to me what the answer to this question is, though I would guess that the coin’s counterfactual probability of landing heads is somewhere strictly between 0% and 50%.
Reviewer is obliged to find all errors.
Not true. A reviewer’s main job is to give a high-level assessment of the quality of a paper. If the assessment is negative, they usually do not look for all the specific errors in the paper. A detailed list of errors is more common when the reviewer recommends that the journal accept the paper (since then the author(s) can revise it before publication), but even then many reviewers do not provide one (which is why it is common to find peer-reviewed papers with errors in them).
At least, this is the case in math.
You don’t harbor any hopes that after reading your post, someone will decide to cooperate in the twin PD on the basis of it? Or at least, if they were already going to, that they would conceptually connect their decision to cooperate with the things you say in the post?
I am not sure how else to interpret the part of shminux’s post quoted by dxu. How do you interpret it?
My point was that intelligence corresponds to status in our world: calling the twins not smart means that you expect your readers to think less of them. If you don’t expect that, then I don’t understand why you wrote that remark.
I don’t believe in libertarian free will either, but I don’t see the point of interpreting words like “recommending”, “deciding”, or “acting” as referring to impossible behavior rather than using their ordinary meanings. However, maybe that’s just a meaningless linguistic difference between us.
A mind-reader looks to see whether this is an agent’s decision procedure, and then tortures them if it is. The point of unfair decision problems is that they are unfair.
dxu did not claim that A could receive the money with 50% probability by choosing randomly. They claimed that a simple agent B that chose randomly would receive the money with 50% probability. The point is that Omega is only trying to predict A, not B, so it doesn’t matter how well Omega can predict B’s actions.
The point can be made even clearer by introducing an agent C that just does the opposite of whatever A would do. Then C gets the money 100% of the time (unless A gets tortured, in which case C also gets tortured).
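A small simulation illustrates the A/B/C asymmetry. The payoff rule below is my reconstruction of the problem under discussion (assumed, not quoted from the thread): Omega perfectly predicts A, and only A, and pays any agent whose choice differs from that prediction.

```python
import random

# Assumed setup: Omega predicts agent A perfectly and pays any agent
# whose choice differs from that prediction of A. A can never win;
# a coin-flipping agent B wins half the time; an anti-A agent C always wins.

random.seed(0)

def agent_A():  # any deterministic policy; Omega predicts this one
    return "left"

def agent_B():  # simple agent choosing at random
    return random.choice(["left", "right"])

def agent_C():  # does the opposite of whatever A would do
    return "right" if agent_A() == "left" else "left"

trials = 10_000
wins = {"A": 0, "B": 0, "C": 0}
for _ in range(trials):
    prediction = agent_A()  # Omega models A, not B or C
    for name, agent in [("A", agent_A), ("B", agent_B), ("C", agent_C)]:
        if agent() != prediction:
            wins[name] += 1

print(wins["A"] / trials)  # 0.0: A never escapes its own prediction
print(wins["C"] / trials)  # 1.0: C always differs from the prediction of A
```

How well Omega could predict B or C never enters the simulation, which is exactly the point: the problem is unfair to A specifically, not to random or contrarian choosers in general.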
I note here that simply enumerating possible worlds evades this problem as far as I can tell.
The analogous unfair decision problem would be “punish the agent if they simply enumerate possible worlds and then choose the action that maximizes their expected payout”. Not calling something a decision theory doesn’t mean it isn’t one.
Again, this is just a calculation of expected utilities, though an agent believing in metaphysical free will may take it as a recommendation to act a certain way.
Are you not recommending that agents act in a certain way? You are answering questions from EYNS of the form “Should X do Y?”, and answers to such questions are generally taken to be recommendations for X to act in a certain way. You also say things like “The twins would probably be smart enough to cooperate, at least after reading this post”, which sure sounds like a recommendation of cooperation (if they do not cooperate, you are lowering their status by calling them not smart).
Games can have multiple Nash equilibria, but agents still need to do something. The way they are able to do something is that they care about something other than what is strictly written into their utility function so far. So the existence of a meta-level on top of any possible level is a solution to the problem of indeterminacy of what action to take.
(Sorry about my cryptic remark earlier, I was in an odd mood)
There I was using “to be” in the sense of equality, which is different from the sense of existence. So I don’t think I was tabooing inconsistently.
Maybe there is no absolutely stable unit, but it seems that there are units that are more or less stable than others. I would expect a reference unit to be more stable than the unit “the difference in utility between two options in a choice that I just encountered”.
This seems like a strawman. There’s a naive EU calculation that you can do based just on price, tastiness of the sandwich, etc., that gives you what you want. And this naive EU calculation can be understood as an approximation of a global EU calculation. Of course, we should always use computationally tractable approximations whenever we don’t have enough computing power to compute an exact value. This doesn’t seem to have anything to do with utility functions in particular.
Regarding the normalization of utility differences by picking two arbitrary reference points, obviously if you want to systematize things then you should be careful to choose good units. QALYs are a good example of this. It seems unlikely to me that a re-evaluation of how many QALYs buying a sandwich is worth would arise from a re-evaluation of how valuable QALYs are, rather than a re-evaluation of how much buying the sandwich is worth.
Right, so it seems like our disagreement is about whether the value of a proposition needs to be constant throughout the entire problem setup, or only throughout a single instance of someone reasoning about that setup.