This is an overall well-reasoned post. I don’t want the conclusion to be true, but that is no reason to downvote.
I noticed this years ago when the variations of the show Big Brother were on TV in various countries. The show consists of compilations of real people spontaneously talking to each other throughout the day. The difference between this and the scripted conversations we saw on TV before Big Brother is huge. Real people apparently hardly talk in complete sentences, which is why scripted conversations are immediately recognizable as being fake. It’s also strange that this is hardly noticeable in real life when you are actually having conversations.
I think one issue with the “person+time” context is that we may assume that once I know the time, I must know whether it is Friday or not. A more accurate assessment would be to say that an indexical proposition corresponds to a set of possible worlds together with a person moment, i.e. a complete mental state. The person moment replaces the “person + time” context. This makes it clear that “It’s Friday” is true in some possible worlds and false in others, depending on whether my person moment (my current mental state, including all the evidence I have from perception etc) is spatio-temporally located at a Friday in that possible world. This also makes intuitive sense, since I know my current mental state but that alone is not necessarily sufficient to determine the time of week, and I could be mistaken about whether it’s Friday or not.
A different case is “I am here now” or the classic “I exist”, which would be true for any person moment and any possible world where that person moment exists. These are “synthetic a priori” propositions. Their truth can be ascertained from introspection alone (“a priori”), but they are “synthetic” rather than “analytic”, since they aren’t true in every possible world, i.e. not in worlds where the person moment doesn’t exist. At least “I exist” is false at worlds where the associated person moment doesn’t exist, and arguably “I am here now” is as well.
Yet another variation would be “I’m hungry”, “I have a headache”, “I have the visual impression of a rose”, “I’m thinking about X”. These only state something about aspects of an internal state, so their truth value only depends on the person moment, not on what the world is like apart from it. So a proposition of this sort is either true in all possible worlds where that person moment exists, or false in all possible worlds where that person moment exists (depending on whether the sensation of hungriness etc is part of the person moment or not). Though I’m not sure which truth value they should be assigned in possible worlds where the person moment doesn’t exist. If “I’m thinking of a rose” is false when I don’t exist, is “I’m not thinking of a rose” also false when I don’t exist? Both presuppose that I exist. To avoid contradictions, this would apparently require a three-valued logic, with a third truth value for propositions like that in case the associated person moment doesn’t exist.
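A minimal sketch of what such a three-valued assignment could look like, assuming a toy representation where a possible world just records which person moments exist in it and what their internal states contain (all names and the representation are hypothetical):

```python
from enum import Enum

class TruthValue(Enum):
    TRUE = "true"
    FALSE = "false"
    UNDEFINED = "undefined"  # the person moment doesn't exist in this world

def evaluate_internal(world, person_moment, content):
    """Truth value of an internal-state proposition (e.g. "I'm hungry"),
    relative to a person moment, in a given possible world.

    `world` maps each person moment existing in that world to the set of
    contents (sensations, thoughts, impressions) of that moment."""
    if person_moment not in world:
        # Neither true nor false: the presupposition of existence fails.
        return TruthValue.UNDEFINED
    return TruthValue.TRUE if content in world[person_moment] else TruthValue.FALSE

# Two toy worlds: in w1 the person moment "pm1" exists and is hungry,
# in w2 it doesn't exist at all.
w1 = {"pm1": {"hunger", "visual impression of a rose"}}
w2 = {"pm2": {"headache"}}

print(evaluate_internal(w1, "pm1", "hunger"))    # TruthValue.TRUE
print(evaluate_internal(w1, "pm1", "headache"))  # TruthValue.FALSE
print(evaluate_internal(w2, "pm1", "hunger"))    # TruthValue.UNDEFINED
```

On this picture, “I’m hungry” evaluated relative to a person moment gets a classical truth value in every world containing that moment and the third value everywhere else, which is the three-valued proposal above.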
And what would this look like? Can you reframe the original argument accordingly?
I meant leogao’s argument above.
It seems the updating rule doesn’t tell you anything about the original argument even when you view information about reference classes as evidence rather than as a method of assigning prior probabilities to hypotheses. Or does it? Can you rephrase the argument in a proper Bayesian way such that it becomes clearer? Note that how strongly some evidence confirms or disconfirms a hypothesis also depends on a prior.
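To make the last point concrete: by Bayes’ theorem,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},$$

so even with the likelihoods $P(E \mid H)$ and $P(E \mid \neg H)$ held fixed, how strongly the evidence $E$ raises or lowers the probability of the hypothesis $H$ still depends on the prior $P(H)$.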
> a prior should be over all valid explanations of the prior evidence.
… but that still leaves the problem of which prior distribution should be used.
It seems you have in mind something like inference to the best explanation here. Bayesian updating, on the other hand, does need a prior distribution, and the question of which prior distribution to use cannot be waved away when there is disagreement about how to update. In fact, that’s one of the main problems of Bayesian updating, and the reason why it is often not used in arguments.
Or DeepSeek-V3-Base.
Something I noticed recently is that there are two types of preference, and that confusing them leads to some of the paradoxes described here.
1. Desires as preferences. A desire (wanting something) can loosely be understood as wanting something which you currently don’t have. More precisely, a desire for X to be true is preferring a higher (and indeed maximal) subjective probability for X to your current actual probability for X. “Not having X currently” above just means being currently less than certain that X is true. Wanting something is wanting it to be true, and wanting something to be true is wanting to be more certain (including perfectly certain) that it is true. Moreover, desires come in strengths. The strength of your desire for X corresponds to how strongly you prefer perfect certainty that X is true to your current degree of belief that X is true. These strengths can be described by numbers in a utility function. In specific decision theories, preferences/desires of this sort are simply called “utilities”, not “preferences”.
2. Preferences that compare desires. Since desires can have varying strengths, the desire for X can be stronger (“have higher utility”) than the desire for Y. In that sense you may prefer X to Y, even if you currently “have” neither X nor Y, i.e. even if you are less than certain that either X or Y is true. Moreover, you may both desire X and desire Y, but preferring X to Y is not itself a desire.
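As a toy sketch of the distinction (the representation and all numbers are made up; this is not meant as a serious formalization):

```python
# Hypothetical toy model: utilities give the strength of a desire (type 1),
# credences give the current degrees of belief.
utilities = {"X": 5.0, "Y": 2.0}
credences = {"X": 0.3, "Y": 0.9}

def desires(p):
    """Type 1: a desire for p is preferring certainty that p is true to
    one's current, less-than-certain degree of belief that p is true."""
    return credences[p] < 1.0 and utilities[p] > 0.0

def prefers(p, q):
    """Type 2: a comparison between the strengths of two desires."""
    return utilities[p] > utilities[q]

print(desires("X"))       # True: not yet certain of X, and the desire has positive strength
print(prefers("X", "Y"))  # True: the desire for X is stronger than the desire for Y
```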
Preferences of type 1 are what arrows express in your graphs (even though you interpret the nodes more narrowly as states, not broadly as propositions which could be true). An arrow $A \to B$ means that if $A$ is the current state, you want to be in $B$. More technically, you could say that in state $A$ you disbelieve that you are in state $B$, and the arrow means you want to come to believe that you are in state $B$. Moreover, the desires inside a money pump argument are also preferences of type 1: they are about things which you currently don’t have but prefer to have.
What about preferences of type 2? Those are the things which standard decision theories call “preferences” and describe with a symbol like “$\succeq$”. E.g. Savage’s or Jeffrey’s theory.
Now one main problem is that people typically use the money pump argument, which talks about preferences of type 1 (desires/utilities), in order to draw conclusions about preferences of type 2 (comparisons between two desires/utilities), without noticing that these are different types. So in this form, the argument is clearly confused.
I mean, I agree: indexicals don’t really work with interpreting propositions simply as sets of possible worlds, but both sentences contain such indexicals, like “I”, implicitly or explicitly. “I” only makes sense for a specific person at a specific time. “It’s Friday (for me)”, relative to a person and a time, fixes a set of possible worlds where the statement is true. It’s the same for “I try to make sure to check the mail on Fridays”.
Where do you think the difference lies? I agree that there is a problem with indexical content, though this affects both examples. (“It’s (currently) Friday (where I live).”)
Even though it doesn’t solve all problems with indexicals, it’s probably better not to start with possible worlds but instead to start with propositions directly, similar to propositional logic. Indeed this is what Richard Jeffrey does. Instead of starting with a set of possible worlds, he starts with a disjunction of two mutually exclusive propositions $X$ and $Y$, whose desirability is given by

$$des(X \lor Y) = \frac{prob(X)\,des(X) + prob(Y)\,des(Y)}{prob(X) + prob(Y)}$$

(assuming $prob(X \lor Y) > 0$).
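As an illustration with made-up numbers: if $prob(X) = 0.2$, $des(X) = 10$, $prob(Y) = 0.3$ and $des(Y) = 0$, the formula gives $des(X \lor Y) = \frac{0.2 \cdot 10 + 0.3 \cdot 0}{0.2 + 0.3} = \frac{2}{0.5} = 4$.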
> If we wanted to be super proper, then preferences should have as objects maximally specific ways the world could be, including the whole history and future of the universe, down to the last detail. Decision theory involving anything more coarse-grained than that is just a useful approximation.
Preferences can be equally rigorously defined over events if probabilities and utilities are also available. Call a possible world $w$, the set of all possible worlds $W$, and a set $A$ such that $A \subseteq W$ an “event”. Then the utility of $A$ is plausibly

$$U(A) = \sum_{w \in A} \frac{P(w)}{P(A)}\, U(w).$$

This is a probability-weighted average, which derives from dividing the expected utility $\sum_{w \in A} P(w)\,U(w)$ by $P(A)$, to arrive at the formula for $U(A)$ alone.
So if we have both a probability function and a utility function over possible worlds, we can also fix a Boolean algebra of events over which those functions are defined. Then a “preference” between two events $A$ and $B$ is simply $U(A) > U(B)$.
“Events” are a lot more practical than possible worlds, since events don’t have to be maximally specific, and they correspond directly to propositions, which one can “believe” and “desire” to be true. Degrees of belief and degrees of desire can be described by probability and utility functions respectively.
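Here is a minimal sketch of this construction in code, with made-up worlds, probabilities and utilities, and plain sets of worlds standing in for the events:

```python
# Toy probabilities and utilities over four possible worlds (made-up numbers).
P = {"w1": 0.1, "w2": 0.4, "w3": 0.3, "w4": 0.2}
U = {"w1": 10.0, "w2": 2.0, "w3": -5.0, "w4": 0.0}

def prob(event):
    """Probability of an event (a set of possible worlds)."""
    return sum(P[w] for w in event)

def utility(event):
    """Utility of an event as the probability-weighted average of the
    utilities of the worlds in it: U(A) = sum_{w in A} P(w)/P(A) * U(w)."""
    p = prob(event)
    if p == 0:
        raise ValueError("utility of a zero-probability event is undefined")
    return sum(P[w] * U[w] for w in event) / p

def prefers(a, b):
    """Preference between events: A is preferred to B iff U(A) > U(B)."""
    return utility(a) > utility(b)

A = {"w1", "w2"}   # an event, i.e. the proposition true exactly in w1 and w2
B = {"w3", "w4"}

print(utility(A))     # (0.1*10 + 0.4*2) / 0.5 = 3.6
print(utility(B))     # (0.3*-5 + 0.2*0) / 0.5 = -3.0
print(prefers(A, B))  # True
```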
Maybe the want comes from the cortex, while the urge comes from the cerebellum. Or the want comes from the superego, while the urge comes from the id. Though I agree that at other times it doesn’t feel necessary to talk of urges. I distinguished two different explanations here. One with urges vs wants, one with “want” vs “want to want”. Though you already touched on the latter.
That’s true. Another theory is that our tolerance for “small pieces of highly engaging information” increases the more we consume, so we need a higher dosage, and if we abstain for a while, the tolerance goes down again (the sensitivity increases) and we no longer need as much. Similar to how you “need” less sugar for food to taste appropriately sweet if you have abstained from sugar for a while.
I’m not addicted to heroin, but I am addicted to other stuff, and I often wouldn’t call it “wanting” the stuff, but “having an urge”. Like doomscrolling. It would be a stretch to say (in ordinary language) I want to doomscroll. I don’t want to doomscroll but I have an urge to doomscroll.
Alternative theory: Social media (and the Internet in general) consists of countless small pieces of highly engaging information which hardly require concentration. This means it is addictive, and it also underutilizes, and thereby weakens, our ability to concentrate on longer texts. The addictiveness makes it hard to stop consuming Internet content, and the weakened concentration skill makes it hard to start reading a book.
I think the most dangerous version of 3 is a sort of Chesterton’s fence, where people get rid of seemingly unjustified social norms without realizing that they were socially beneficial. (Decline in high g birthrates might be an example.) Though social norms are instrumental values, not beliefs, and when a norm was originally motivated by a mistaken belief, it can still be motivated by recognizing that the norm is useful, which doesn’t require holding on to the mistaken belief.
Do you have an example for 4? It seems rather abstract and contrived.
Generally, I think the value of believing true things is almost always positive. Examples to the contrary seem mostly contrived (basilisk-like infohazards) or occur only relatively rarely. (E.g. believing a falsehood makes you more convincing when asserting it, as you don’t technically have to lie, but lying is mostly bad or not very good anyway.)
Overall, I think the risks from philosophical progress aren’t overly serious while the opportunities are quite large, so the overall EV looks comfortably positive.
> You are casting preference to only extend into the future. I guess that is the largest usage of “preference”. But people also frequently say things like “I wish that hadn’t happened to me”, so it’s also frequently used about the past.
I thought about this, and I agree “regret” wishes like this are clearly possible. We can think of wishes (desires, “wants”) as preferences, namely as preferring something we don’t have to something we do have. Or more precisely and generally, as preferring being certain that something is true to the actual current case of being less than certain that it is true. This means someone can wish that they hadn’t been born, which means they prefer not having been born to the actual case of having been born. More technically, this would mean preferring a certainty (100% probability) of not having been born to their current 0% belief that they have not been born.
However, this means we cannot wish for anything we already have, because that would be preferring having something to having it. We cannot prefer being certain that X is true to our current degree of belief that X is true if that current degree of belief already amounts to certainty that X is true. In short, we can’t prefer being certain that X to our current state of being certain that X.[1]
This means I can wish that I had never been born (regret being born), but I cannot wish that I had been born. So it can actually be bad for someone to come into existence, namely when they, after coming into existence, regret having been born (e.g. because of very bad life conditions). But it cannot similarly be good for them to have come into existence, as there is no possible preference of this sort. If they have come to exist, they can’t prefer having come to exist to their actual state of having come to exist (as elaborated above), and if they don’t come to exist, they can’t regret that they didn’t come to exist, because someone who doesn’t exist can’t form any preference, including a preference for existing over the actual current state of not existing.
In fact, this point has been argued before in the context of decision theory. In chapter 4.4 of Richard Jeffrey’s book The Logic of Decision he writes (emphasis in the original):
> Socrates argues (Symposium, 200) that Love is not a god because to desire something is to be in want of it: you cannot desire what you already have. The point is also made in contemporary dictionaries. Thus, The Concise Oxford Dictionary defines a desire as an unsatisfied appetite, longing, wish, or craving. This account of the matter seems roughly right, although it needs some refinement. Thus, to vary the example of the Symposium, since people do not always know when they are loved, it is entirely possible to desire someone’s love when you already have it. Therefore, it seems better to say that you cannot desire what you think you have. Taking propositions as the objects of desire, the doctrine becomes: one who believes that a proposition is true cannot desire that it be true.
I would clarify that strictly speaking you can believe something to be true but still wish it to be true, as long as your belief (as is normally the case) falls short of certainty. Thus, if there is a thunderstorm, you may be 99% sure you won’t be struck by lightning, but you can still desire not to be struck by lightning, insofar as you (more or less strongly) prefer a 100% certainty of not being struck to your current 99% belief that you will not be struck. Another way of looking at this: if being struck by lightning is “extremely bad”, say it has utility $-u$ for some large positive $u$, then the expected utility of it given a 1% chance/belief is $-0.01u$, which may still be “very bad”.
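To put a made-up number on it: if being struck by lightning has utility −100,000, then a 1% credence in it contributes 0.01 × (−100,000) = −1,000 in expectation, which is still substantially negative.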
[1] Of course we can e.g. still wish for (being certain of) things happening in the future, insofar as we are not already certain they will happen. ↩︎
“X because Y” implies “X and Y”, though not the other way round.