One reason for using squared errors, which may be good or bad depending on the context, is that it’s usually easier to Do Mathematics on it.
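To make the "easier to Do Mathematics" point concrete (a minimal sketch, my own example): the sum of squared errors has a closed-form minimizer, because its derivative is linear in the candidate value; setting d/dm Σ(xᵢ − m)² = −2 Σ(xᵢ − m) = 0 gives m = mean.

```python
# Minimizing the sum of squared errors has a closed-form answer: the mean.
# d/dm sum((x - m)^2) = -2 * sum(x - m) = 0  =>  m = mean(x).
xs = [1.0, 2.0, 4.0, 9.0]

def sse(m):
    return sum((x - m) ** 2 for x in xs)

mean = sum(xs) / len(xs)   # 4.0

# The mean beats any nearby candidate, as the calculus predicts.
assert all(sse(mean) <= sse(mean + d) for d in (-0.5, -0.1, 0.1, 0.5))
```

Absolute error, by contrast, leads to the median via a non-differentiable objective, with no such one-line solution.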
When you flush, you create a fine mist of very-much-not-clean toilet water that covers everything in the bathroom, including your hands.
This is why I always close the lid, if there is one, before flushing.
This is the same 6500-word essay linked in the OP. It might be helpful to note that (I think) the relevant part is the very last two paragraphs. And you say there that you are not sure what Becker meant by practicing dying. The concrete method you describe is:
I’ll lay down in bed and imagine that I’m about to die in the next 5-15 minutes. … When I first started doing this I found it very distressing, but over time I’ve gotten a lot more capable at soberly considering the end of my existence.
Ok, I imagined it. I shrug.
I would prefer to not die, and most ways of dying range from unpleasant to dreadful, besides the fact that they end with death. I have had a narrow brush with one of those, and seen it happen to a few other people, and of course I know that it happens to everyone. That pretty much covers my attitude to death and dying. But I get the impression that this is not what you or Ernest Becker mean by “fear of death”. Do you mean something more than this?
Composing Chinese with moveable type is still slower, because you need at least a thousand, maybe several thousand, different characters. Just physically selecting them is time-consuming. Back in the days of mechanical typewriters, attempts were made to design typewriters for Chinese and Japanese, but using them was no faster than writing by hand. A skilled typist on an alphabetic typewriter can go much faster.
I already said that I think that thinking in terms of infinitary convex combinations, as you’re doing, is the wrong way to go about it; but it took me a bit to put together why that’s definitely the wrong way.
Specifically, it assumes probability! Fishburn, in the paper you link, assumes probability, which is why he’s able to talk about why infinitary convex combinations are or are not allowed (I mean, that and the fact that he’s not dealing with arbitrary actions).
Savage doesn’t assume probability!
Savage doesn’t assume probability or utility, but their construction is a mathematical consequence of the axioms. So although they come later in the exposition, they mathematically exist as soon as the axioms have been stated.
So if you want to disallow certain actions… how do you specify them?
I am still thinking about that, and may be some time.
As a general outline of the situation, you read P1-7 ⇒ bounded utility as modus ponens: you accept the axioms and therefore accept the conclusion. I read it as modus tollens: the conclusion seems wrong, so I believe there is a flaw in the axioms. In the same way, the axioms of Euclidean geometry seemed very plausible as a description of the physical space we find ourselves in, but conflicts emerged with phenomena of electromagnetism and gravity, and eventually they were superseded as descriptions of physical space by the geometry of differential manifolds.
It isn’t possible to answer the question “which of P1-7 would I reject?” What is needed to block the proof of bounded utility is a new set of axioms, which will no doubt imply large parts of P1-7, but might not imply the whole of any one of them. If and when such a set of axioms can be found, P1-7 can be re-examined in their light.
The paper is not the degree. It is a certificate of having the degree. The degree itself is the fact of its having been conferred. This is an objective historical fact that cannot be repossessed, short of 1984 with its memory holes and workers keeping records updated to agree with currently decreed official truth (that is, official lies). Even if the university is obliged to rescind the conferral, that merely adds another historical fact to the record. If an employer regards the rescission as a penalty for defaulting on a student loan, they are free to take that as evidence of the student’s financial standing but disregard it as evidence against their academic record.
“My password to … is …”
Is this the personal or impersonal “you”?
Instead, it’s basically “Moral uncertainty is uncertainty about moral matters”, which then has to be accompanied with a range of examples and counterexamples of the sort of thing we mean by that.
What need is there for a definition of “moral uncertainty”? Empirical uncertainty is uncertainty about empirical matters. Logical uncertainty is uncertainty about logical matters. Moral uncertainty is uncertainty about moral matters. These phrases mean these things in the same way that “red car” means a car that is red, and does not need a definition.
If one does not believe there are objective moral truths, then “Moral uncertainty is uncertainty about moral matters” might feel problematic. The problem lies not in “uncertainty” but in “moral matters”. But that is an issue you have postponed.
How does this work in the military? They have a very deep hierarchy: is life in the army above private and below commander-in-chief also a maze?
The link is 451 for me: “Unavailable due to legal reasons”. The specifics:
We recognize you are attempting to access this website from a country belonging to the European Economic Area (EEA) including the EU which enforces the General Data Protection Regulation (GDPR) and therefore access cannot be granted at this time. For any issues, contact firstname.lastname@example.org or call (301) 722-4600.
Prima facie that looks like bullshit, but recognising that doesn’t get me the web page. Time I looked into a VPN account. Any suggestions?
BTW, mousing over the same link on your web page gives me a popup saying “Too many requests”, which none of the others do. What’s up there?
Again, I’m simply not seeing this in the paper you linked? As I said above, I simply do not see anything like that outside of section 9, which is irrelevant. Can you point to where you’re seeing this condition?
In Fishburn’s “Bounded Expected Utility”, page 1055, end of first paragraph (as cited previously):
However, we shall for the present take P = P_d (for any σ-algebra that contains each {x}) since this is the Blackwell-Girshick setting. Not only is P_d an abstract convex set, but also if p_i ∈ P_d and α_i ≥ 0 for i = 1, 2, …, and Σ_i α_i = 1, then Σ_i α_i p_i ∈ P_d.
That depends on some earlier definitions, e.g. P_d is a certain set of probability distributions (the “d” stands for “discrete”) defined with reference to some particular σ-algebra, but the important part is that last infinite sum: this is where all infinitary convex combinations are asserted to exist. Whether that is assigned to “background setup” or “axioms” does not matter. It has to be present, to allow the construction of St. Petersburg gambles.
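To spell out why that closure condition matters (my own sketch, not Fishburn’s notation): if utility is unbounded, pick outcomes x_n with u(x_n) ≥ 2^n, let p_n be the distribution concentrated on x_n, and take weights α_n = 2^{-n}. The condition asserts that the combination Σ α_n p_n exists in P_d, and its expected utility diverges:

```latex
\sum_{n=1}^{\infty} \alpha_n = \sum_{n=1}^{\infty} 2^{-n} = 1,
\qquad
\sum_{n=1}^{\infty} \alpha_n \, u(x_n) \;\ge\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty .
```

This is exactly the St. Petersburg structure: each component gamble has finite utility, but the infinitary combination does not.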
Will address the rest of your comments later.
A further short answer. In Savage’s formulation, from P1-P6 he derives Theorem 4 of section 2 of chapter 5 of his book, which is linear interpolation in any interval. Clearly, linear interpolation does not work on an interval such as [17,Inf], therefore there cannot be any infinitely valuable gambles. St. Petersburg-type gambles are therefore excluded from his formulation.
Savage does not actually prove bounded utility. Fishburn did this later, as Savage footnotes in the edition I’m looking at, so Fishburn must be tackled. Theorem 14.5 of Fishburn’s book derives bounded utility from Savage’s P1-P7. His proof seems to construct a St. Petersburg gamble from the supposition of unbounded utility, deriving a contradiction. I shall have to examine further how his construction works, to discern what in Savage’s axioms allows the construction, when P1-P6 have already excluded infinitely valuable gambles.
Or if you have some formalism where preferences can be undefined (in a way that is distinct from indifference), by all means explain it… (but what happens when you program these preferences into an FAI and it encounters this situation? It has to pick. Does it pick arbitrarily? How is that distinct from indifference?)
A short answer to this (something longer later) is that an agent need not have preferences between things that it is impossible to encounter. The standard dissolution of the St. Petersburg paradox is that nobody can offer that gamble. Even though each possible outcome is finite, the offerer must be able to cover every possible outcome, requiring that they have infinite resources.
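To put numbers on that (a minimal sketch, assuming the usual doubling payouts of 2, 4, 8, …): if the offerer’s bankroll is capped at B, the gamble’s expected value is finite, and grows only logarithmically in B.

```python
# St. Petersburg gamble against a finite bankroll B: round n pays 2^n
# and is reached with probability 2^-n, but the offerer can pay at most B.
def expected_payout(bankroll):
    total, n = 0.0, 1
    while 2 ** n <= bankroll:
        total += (2 ** -n) * (2 ** n)   # each affordable round contributes 1
        n += 1
    # all remaining branches pay out only the capped bankroll
    total += (2 ** -(n - 1)) * bankroll
    return total

# Expected value is about log2(bankroll): finite resources, finite value.
print(expected_payout(2 ** 10))   # 11.0
print(expected_payout(2 ** 20))   # 21.0
```

Only with literally infinite resources does the expectation diverge, which is why the gamble cannot actually be offered.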
Since the gamble cannot be offered, no preferences between that gamble and any other need exist. If your axioms require both that preference must be total and that St. Petersburg gambles exist, I would say that that is a flaw in the axioms. Fishburn (op. cit., following Blackwell and Girshick, an inaccessible source) requires that the set of gambles be closed under infinitary convex combinations. I shall take a look at Savage’s axioms and see what in them is responsible for the same thing.
Looking at the argument from the other end, at what point in valuing numbers of intelligent lives does one approach an asymptote, bearing in mind the possibility of expansion to the accessible universe? What if we discover that the habitable universe is vastly larger than we currently believe? How would one discover the limits, if there are any, to one’s valuing?
“I feel alone” isn’t a statement of something being a failure. It’s just a statement about the current emotional state.
Perhaps this is a tangent to the discussion, but “I feel alone” is not a statement about an emotional state. It is a confused statement that on the surface appears to be about emotions (“I feel...”) but the thing that follows those first two words is not an emotion, but a claim about the world: “(I am) alone.”
“I feel sad” is a description of an emotional state. “I feel sad about...” or “I feel sad that...” are descriptions of emotional states, together with, but separate from, a statement of a belief about the world. “I feel alone” and similar phrases, such as the general pattern “I feel that...”, confuse feelings with beliefs.
Every statement of the form “I feel that...” is false, because what follows the “that” is a belief about the world, not a feeling. Acknowledging it as a belief makes it possible to consider “Is this belief true? Why do I believe it is true?” Miscalling it a feeling protects it from testing against reality: “How can you question my FEELINGS?”
But, to put it simply, if your ethical assumptions contradict the mathematics, it’s not the mathematics that’s wrong.
The mathematics includes axioms, and axioms certainly can be wrong. That is, they can be false of the things in the real world that they were invented in order to describe. As Einstein said, “As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”
I haven’t studied the proof of boundedness in detail, but it seems to be this: unbounded utilities allow St. Petersburg-type combinations of them with infinite expected utility, yet every gamble is supposed to have finite utility, which is a contradiction. Or, if infinite utilities are not immediately a problem, the contradiction comes by a more complicated argument: one constructs multiple St. Petersburg-type combinations and demonstrates that the axioms imply that there both should and should not be a preference between them.
I believe that the first of those arguments is what Fishburn is alluding to in his paper “Bounded Expected Utility” (paywalled, also sci-hubbed) when he says that it is “easily seen to be bounded” (1st paragraph of section 4, p.1055). (Fishburn’s book is rather too dense to speed-read all the way to his boundedness theorems.) He does not give details, but the argument that I conjecture from his text is that if there are unbounded utilities then one can construct a convex combination of infinitely many of them that has infinite utility (and indeed one can), contradicting the proof from his axioms that the utility function is a total function to the real numbers.
But by a similar argument, one might establish that the real numbers must be bounded, when instead one actually concludes that not all series converge and that one cannot meaningfully compare the magnitudes of divergent infinite series. Inf–Inf = NaN, as IEEE 754 puts it. All it takes is sufficient art in constructing the axioms to make them seem individually plausible while concealing the contradiction that will be sprung.
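The IEEE 754 point can be checked directly (a small sketch in Python, whose floats are IEEE 754 doubles):

```python
import math

# Two divergent "totals": both are infinite...
a = float("inf")
b = float("inf")

# ...and their difference is NaN, not zero: divergent quantities
# cannot be meaningfully compared by subtraction.
print(a - b)                 # nan
print(math.isnan(a - b))     # True
print(a == b, a - b == 0.0)  # True False
```

The two infinities compare equal, yet their difference is undefined, which is exactly the situation with comparing divergent series by their “sums”.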
Individually plausible axioms do not necessarily have a plausible union.
I note that in order to construct convex combinations of infinitely many states, Fishburn extends his axiom 0 to allow this. He does not label this extension separately as e.g. “Axiom 0*”. So if you were to ask which of his axioms to reject in order to retain unbounded utility, it could be none of those labelled as such, but the one that he does not name, at the end of the first paragraph on p.1055. Notice that the real numbers satisfy Axiom 0 but not Axiom 0*. It is that requirement that all infinite convex combinations exist that surfaces later as the boundedness of the range of the utility function.
While searching out the original sources, I found a paper indicating that at least in 1993, bounded utility theorems were seen as indicating a problem with Savage’s axioms: “Unbounded Utility for Savage’s ‘Foundations of Statistics’ and Other Models”, by Peter Wakker. There is another such paper from 2014. I haven’t read them, but they indicate that proofs of boundedness of utility are seen as problems for the axioms, not discoveries that utility must be bounded.
The title of Aumann’s paper is just a pithy slogan. What the slogan means as the title of his paper is the actual mathematical result that he proves. This is that if two agents have the same priors, but have made different observations, then if they share only their posteriors, and each properly updates on the other’s posterior, and repeat, then they will approach agreement without ever having to share the observations themselves. In other papers there are theorems placing practical bounds on the number of iterations required.
In actual human interaction, there is a large number of ways in which disagreements among us may fall outside the scope of this theorem. Inaccuracy of observation. All the imperfections of rationality that may lead us to process observations incorrectly. Non-common priors. Inability to articulate numerical priors. Inability to articulate our observations in numerical terms. The effort required may exceed our need for a resolution. Lack of good faith. Lack of common knowledge of our good faith.
Notice that these are all imperfections. The mathematical ideal remains. How to act in accordance with the eternal truths of mathematical theorems when we lack the means to satisfy their hypotheses is the theme of a large part of the Sequences.
This is what FAQs are for. On LW, The Sequences are our FAQ.
From the full text:
I don’t want confident public school bluffers.
*cough* Boris Johnson *cough*. But if that’s what you have to work with...