I think your hypothetical universe of infinite suffering is unsuited to the usual sort of utility calculations. The following small variant on it certainly is:
We have infinitely many people—call them −1, −2, −3, …—living more or less equivalent lives of misery. We also have infinitely many people—call them +1, +2, +3, …—living good lives. (The addition of infinitely many good lives is what distinguishes my hypothetical universe from yours, though you didn’t actually say there aren’t infinitely many good lives in yours to go with the infinitely many bad ones.)
Now Omega comes along and offers to take a million people and make their lives good. So now we have, let’s say, −1000001, −1000002, −1000003, … on the bad side and −1000000, …, −1 together with +1, +2, +3, … on the good side. Big improvement, right? Not necessarily, because here’s something else Omega could have done instead: just change the names by subtracting 1000000 from the numbers of all the minuses and of all the plusses from +1000001 onwards, and negating the numbers of +1 … +1000000. This yields exactly the same assignment of numbers to positive/negative, but no one actually got their life improved.
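The renaming trick can be made concrete with a small sketch (my own toy model, not anything from the discussion itself: the million is scaled down to M = 1000, and a finite window of labels stands in for the infinite sets):

```python
# Toy model of the two Omega moves (scaled down: M = 1000 stands in for
# 1000000, and a finite window of labels stands in for the infinite sets).
M = 1000

def improve(config):
    """Omega actually helps: people -1 .. -M move from bad to good."""
    new = dict(config)
    for k in range(-M, 0):
        new[k] = "good"
    return new

def relabel(label):
    """Omega's do-nothing renaming: subtract M from every minus and from
    every plus above +M, and negate +1 .. +M. Nobody's life changes."""
    if label < 0:
        return label - M
    if label <= M:
        return -label
    return label - M

window = list(range(-5 * M, 0)) + list(range(1, 5 * M + 1))
original = {n: ("bad" if n < 0 else "good") for n in window}

improved = improve(original)
renamed = {relabel(n): side for n, side in original.items()}

# Where the two label windows overlap, the end states are indistinguishable,
# even though only one of the two moves helped anyone.
common = improved.keys() & renamed.keys()
assert common and all(improved[k] == renamed[k] for k in common)
```

The point of the sketch is that `improved` and `renamed` assign identical sides to every shared label, although only `improve` changed anyone's life.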
Situations with infinite utilities in them are really hard to reason about. It’s possible that there is no way to think about them that respects all our moral intuitions. (Or indeed our intuitions about preferences and rationality. These don’t need to be specifically moral questions.)
When the infinite utilities are on only one side—in your case we have infinite misery but not infinite happiness—you don’t get the exact same problems; I don’t see any way to produce results quite as crazy as the one I described—but my feeling is that there too we have no right to expect that our intuitions are all mutually consistent.
With surreal math, ω+10000000 > ω. I am doubtful whether the two possible options would actually end in the same situation. And in a way I don’t care about how people are labelled; I care about people’s lives. It could just be that the relationship between labels for people and the number of people diverges. For example, in the infinite realm cardinality and ordinality diverge, while in the finite realm they coincide. It could be that your proof shows that there is no ordinal improvement, but what if the thing to be cared about behaves like a cardinality?
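That comparison can be sketched with a toy encoding of numbers of the form a·ω + b (my own simplification for illustration, not a real surreal-number library):

```python
from dataclasses import dataclass

# Numbers of the form omega*ω + real, compared lexicographically:
# order=True generates comparisons field by field, so the ω coefficient
# dominates and the finite part breaks ties.
@dataclass(frozen=True, order=True)
class OmegaLinear:
    omega: int  # coefficient of ω
    real: int   # finite part

w = OmegaLinear(1, 0)                # ω
w_plus = OmegaLinear(1, 10_000_000)  # ω + 10000000

# Under this ordering the "improved" total really is strictly bigger,
# unlike with cardinal arithmetic, where aleph-0 + 10000000 = aleph-0.
assert w_plus > w
```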
And we can just as well scale down the numbers. Given that there are infinitely many bad lives and infinitely many good lives, taking one person from bad to good seems like it ought to improve matters, but technically it seems to result in the same state.

I do wonder about this: if I have the reals colored white for x>0 and black for x<=0, then 0 should be colored black. Now if I change the color of 0 to white, I should get the result that there is now more white than previously and less black (even if only infinitesimally so). But if you technically take the measure of the point 0 and of the white portion of the line, one might end up saying things like “the measure of a single point is 0”. What I would say is that measures are usually defined to be reals, but here the measure would need to be surreal in order to accommodate questions about infinities. For example, in normal form we could write a surreal as “a + b ω^1 + c ω^2 + …” with a, b, c being real factors. Then it would make sense that a finite countable collection would have b=0, but that doesn’t mean the whole sum is 0. While we can’t use any small positive real for b for points, I would still say that a point is “pointlike” and not nothing at all. A finite number of points is never going to be more than a line segment, no matter how short the segment. But that doesn’t mean points don’t count at all; it just means that a point’s “worth” is infinitesimal when compared to a line’s.
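The recoloring intuition can be sketched in code. This is my own toy construction, under the assumption that a point gets the infinitesimal measure ω^(-1); the encoding is just one illustrative choice, not standard measure theory:

```python
# Toy surreal-valued "measure": a dict mapping powers of ω to real
# coefficients. {0: 1} is one unit's worth of line; {-1: 1} is a single
# point, worth an infinitesimal ω^(-1) rather than exactly 0.

def add(m1, m2):
    """Add two measures coefficient-wise, dropping zero terms."""
    out = dict(m1)
    for exp, coeff in m2.items():
        out[exp] = out.get(exp, 0) + coeff
    return {e: c for e, c in out.items() if c != 0}

def less(m1, m2):
    """Compare by the highest power of ω where the measures differ."""
    for e in sorted(set(m1) | set(m2), reverse=True):
        a, b = m1.get(e, 0), m2.get(e, 0)
        if a != b:
            return a < b
    return False

point = {-1: 1}
white_before = {0: 1}                    # some white region, one unit's worth
white_after = add(white_before, point)   # recolor the point 0 to white

# Strictly more white than before, even if only infinitesimally so:
assert less(white_before, white_after)

# Yet no finite number of points outweighs a segment, however short:
assert less({-1: 1_000_000}, {0: 10**-9})
```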
Yup. But I don’t see a good reason to apply it that way here, nor a good principled way of doing so that gives the results you want. I mean, how are you going to calculate the net utility in the “after” situation to arrive at ω+10000000 rather than ω?
It looks to me like it’s the other way around. Surreal integer arithmetic is much more like ordinal arithmetic than like cardinal arithmetic. That’s one reason why I’m skeptical about the prospects for applying it here: it seems like it might require imposing something like an ordering on the people whose utilities we’re aggregating.
Despite my comments above, I do think it’s worth giving more consideration to using a richer number system for utilities.
[EDITED to fix an inconsequential and almost invisible typo.]
It does occur to me that while giving the people an order might be suspicious, utilities are a shorthand for preferences, which are defined as orderings of preferring a over b. Therefore there is going to be a conversion to ordinals anyway, so surreals should remain relevant.
I don’t think I’m convinced. Firstly, because in these cases where we’re looking at aggregating the interests of large collections of people it’s the people, not the possibilities, that seem like they need to be treated as ordered. Secondly, because having an ordering on preferences isn’t at all the same thing as wanting to use anything like ordinals for them. (E.g., cardinals are ordered too—well-ordered, even—at least if we assume the axiom of choice. The real numbers are ordered in the obvious way, but that’s not a well-ordering. Etc.)
I concede that what I’m saying is very hand-wavy. Maybe there really is a good way to make this sort of thing work well using surreal numbers as utilities. And (perhaps like you) I’ve thought for a long time that using something like the surreals for utilities might turn out to have advantages. I just don’t currently see an actual way to do it in this case.
Well, we don’t actually need infinite suffering, just a large unknown unbounded number.
Is there a practical difference between infinity and a “large unknown unbounded number”? It still falls into the issue of “is the number large enough?”, and if it’s not, you missed suffering you could’ve alleviated. So from an in-universe[1] viewpoint there is no reason not to state googol^googol, alphabetagammadeltaomega-quadtribillion[2], the total length of all possible 4096-bit public keys, or some other horror that might make reality crash.
[1] I’m assuming we’re still in the “timeless celestial beings” universe.
[2] I’m making stuff up.
If you create an actual infinity then things get weird. Many intuitive rules don’t hold. So I don’t want an actual infinity.
But a large, unknown number could easily be some sort of infinity.
Let’s look at it another way. Say I choose some unknown number as you described. Any reason I couldn’t be enlightened by “well, if you had chosen number+1, you could have saved the universe”?
I definitely am lacking in my mathematical knowledge so if there’s a way to deal with un-measured numbers I’d appreciate if someone could enlighten me.
“But a large, unknown number could easily be some sort of infinity.”—it could if I hadn’t specified that we are assuming it is finite.
Then the best decision is to make some calculations, say, how much suffering there is per 1m/km² on average, multiply that by how much of the universe you can observe, then add an incredibly large number of 9s to its right side. Use all the excess utility to expand your space travel and observation and save the other planets from suffering.
In order to guarantee being able to deliver whatever utility change the player demands in the way you describe, Omega needs there to be an infinite amount of suffering to relieve.
[EDITED to add:] If whoever downvoted this would like to explain my error, I’d be interested. It looks OK to me. Or was it just Eugine/Azathoth/Ra/Lion punishing me for not being right-wing enough again?
Not me.
(retracted)
I made no claim that those are the only two possibilities. But, for what it’s worth, here are the options I actually see. First, “legitimate” ones where someone read what I wrote, thought it was bad, and voted it down on that ground:
Perhaps I made a dumb mistake, or someone thought I did. (Not at all unlikely; I make mistakes, and so do other people, and downvoting something for being wrong is not uncommon behaviour on LW. Further, this is a discussion involving fiddly reasoning that it’s easy to get wrong, making it more likely that I’ve done something dumb and more likely that someone else wrongly thinks I have.)
Perhaps I was (or someone thought I was) pointlessly rude, or something of the kind. (I can’t see any reason why anyone would think that in this instance, though.)
Perhaps there is (or someone thinks there is) some other thing badly wrong with my comment. (I can’t think what.)
Then there are the options where the downvote was not on the basis of (actual or perceived) problems with the comment itself:
Perhaps someone downvoted it purely by mistake—finger-slip or whatever. That’s always possible, but I’ve seen no sign that this happens with non-negligible frequency on LW. (It happens fairly often on Hacker News, but their UI design puts the upvote and downvote arrows very close together and provides no way to correct accidental votes.)
Perhaps someone downvoted it purely at random. Also always possible, but it seems like a very odd thing to do and I’ve not encountered any evidence that that’s a thing that happens here. (Though perhaps I shouldn’t expect to have; it might be very hard to spot.)
Perhaps someone downvoted it for the sake of downvoting me: they dislike other things I’ve written, or have a personal grudge against me, or something. (I have had this happen multiple times before, often shortly after an exchange of comments with Eugine/Azathoth/Ra; in at least one case, and I think more than one, a moderator has confirmed that Eugine/Azathoth/Ra has been downvoting substantial numbers of my comments in fairly rapid succession; Eugine/Azathoth/Ra has been observed behaving in a similar way towards other people, and has actually had three identities kicked off LW for such behaviour already. So the prior for this is not small. And, as it happens, I have recently been disagreeing elsewhere on LW with someone who shows signs of being the latest incarnation of Eugine/Azathoth/Ra.)
Of course it’s possible that this is the explanation but it isn’t Eugine/Azathoth/Ra. But I don’t know of any good evidence that there’s anyone else engaging in such behaviour, and in particular I haven’t noticed any obvious signs of anyone else doing it to me.
Perhaps someone downvoted it because they don’t like the topic: they think LW shouldn’t be discussing such things. (Unlikely in this case; this is a very typical LW topic, and I don’t see other comments in the discussion getting mysterious downvotes.)
Perhaps there was some other kind of reason that I haven’t thought of. (Always possible but, well, I haven’t thought of any plausible candidates.)
So. It looks to me like there are lots of low-probability explanations, plus “someone thinks I made a dumb mistake”, plus “Eugine/Azathoth/Ra wanted to downvote something I wrote”, the latter two being things that have happened fairly often and good candidates for what’s happened here. And if someone thinks I made a dumb mistake, it seems like explaining what it was would be a good idea (whether the mistake is mine or theirs). Hence my comment.
(This feels like about two orders of magnitude more analysis than this trivial situation warrants, but no matter.)
On reflection, I see that you’re right; I inferred too much from your comment. What you said was that you’d be interested in an explanation of your error, if and only if you committed one; followed by asking the separate, largely independent question of whether Eugine/Azathoth/Ra/Lion was punishing you for not being right-wing enough again. I erroneously read your comment as saying that you’d be interested in (1) an explanation of your error or (2) the absence of such an explanation, which would prove the Eugine hypothesis by elimination. Sorry for jumping the gun and forcing you into a bunch of unnecessary analysis.
No problem.
Indeed I was not claiming that the absence of an explanation would prove it was Eugine. It might simply mean that whoever downvoted me didn’t read what I wrote, or that for whatever reason they didn’t think it would be worth their while to explain. Or the actual reason for the downvote could be one of those low-probability ones.
One correction, though: I would be interested in an explanation of my error if and only if whoever downvoted me thinks I committed one. Even if in fact I didn’t, it would be good to know if I failed to communicate clearly, and good for them to discover their error.
And now I shall drop the subject. (Unless someone does in fact indicate that they downvoted me for making a mistake and some sort of correction or clarification seems useful.)
(retracted)
Ah, I hadn’t taken in that the person complaining rudely that I hadn’t considered all the possibilities for why I got downvoted might be the person who downvoted me. In retrospect, I should have.
Anyway (and with some trepidation since I don’t much relish getting into an argument with someone who may possibly just take satisfaction in causing petty harm): no, it doesn’t look to me as if casebash’s arguments are much like 2+2=5, nor do I think my comments are as obvious as pointing out that actually it’s 4. The sort of expected-utility-maximizing that’s generally taken around these parts to be the heart of rationality really does have difficulties in the presence of infinities, and that does seem like it’s potentially a problem, and whether or not casebash’s specific objections are right they are certainly pointing in the direction of something that could use more thought.
I do not think I have ever encountered any case in which deliberately making a problem worse to draw attention to it has actually been beneficial overall. (There are some kinda-analogous things in realms other than human affairs, such as vaccination, or deliberately starting small forest fires to prevent bigger ones, but the analogy isn’t very close.)
If indeed LW has become irredeemably shit, then amplifying the problem won’t fix it (see: definition of “irredeemably”) so you might as well just fuck off and do something less pointless with your time. If it’s become redeemably shit, adding more shit seems unlikely to be the best way of redeeming it so again I warmly encourage you to do something less useless instead. But these things seem so obvious—dare I say it, so much like pointing out that 2+2=4?—that I wonder whether, deep down, under the trollish exterior, there lurks a hankering for something better. Come to the Light Side! We have cookies.
I’ll let the rest of your comment stand, but:
2 + 2 = 2 + 2 + 0. A number subtracted from itself equals 0, and infinity is a number, so 2 + 2 = 2 + 2 + ∞ - ∞. Infinity plus one is still infinity, so 2 + 2 = 2 + 2 + (∞ + 1) - ∞ = 2 + 2 + (1 + ∞) - ∞ = (2 + 2 + 1) + (∞ - ∞) = 2 + 2 + 1 + 0 = 5. So nyah.
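For what it’s worth, IEEE floating-point arithmetic locates the cheat at the same step: the difference of two infinities is indeterminate, not zero, so the cancellation never happens.

```python
import math

inf = float("inf")

assert inf + 1 == inf          # "infinity plus one is still infinity": fine
assert math.isnan(inf - inf)   # but inf - inf is NaN (indeterminate), not 0
```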
(Sorry, casebash, but it was too good a setup to ignore.)
LOL.
(retracted)
I can has raisin?