I’m Harry Altman. I do strange sorts of math.
Posts I’d recommend:
A summary of Savage’s foundations for probability and utility—if the arguments used to ground probability and utility seem circular to you, here’s a non-circular way of doing it.
The initial example is inane. I believe XKCD has an appropriate comment on the matter.
In more detail: While strictly speaking “Frodo Baggins” may not be a unique identifier, there is a fictional person people will understand it to mean without additional context; as such, without that additional context, it must be taken to have the usual meaning of “Frodo Baggins”. Hence we get it “wrong” only because you used the name “Frodo Baggins” with a meaning other than its usual one. (Analogous to the comic.)
If you did something to indicate that it need not be the usual Frodo Baggins—for instance, opening it with “Someone named Frodo Baggins”, and the effect still (or rather actually) occurred, that would be an example.
Edit: Oh, hell. Just took a look at the references. I had actually thought this was intended seriously...
Took the survey.
Almost nothing in this post is correct. This post displays not just a misuse of and failure to understand surreal numbers, but a failure to understand cardinals, ordinals, free groups, lots of other things, and just how to think about such matters generally, much as in our last exchange. The fact that (as I write this) this is sitting at +32 is an embarrassment to this website. You really, really, need to go back and relearn all of this from scratch, because going by this post you don’t have the slightest clue what you’re talking about. I would encourage everyone else to stop upvoting this crap.
This whole post is just throwing words around and making assertions that assume things generalize in a particular naïve way that you expect. Well, they don’t, and certainly not obviously.
Really, the whole idea here is wrong. The fact that something does not extend to infinities or infinitesimals is not somehow a paradox. Many things don’t extend. There’s nothing wrong with that. Some things, of course, do extend if you do things properly. Some things extend in more than one way, with none of them being more natural than the others! But if something doesn’t extend, it doesn’t extend. That’s not a paradox.
Similarly, the fact that something has unexpected results is not a paradox. The right solution for some of these is just to actually formalize them and accept the results. No further “resolution” is required.
In the hopes of making my point absolutely clear, I am going to take these one by one. ~(As per the bullshit asymmetry principle, I’m afraid my response will be much longer than the original post.)~ (OK, I guess that turned out not to be true.) Those that involve philosophical problems in addition to just mathematical problems I will skip on my first pass, if you don’t mind (well, some of them, anyway; and I may have slightly misjudged some of the ones I skipped, because, well, I skipped them—point is I’m skipping some, it hardly matters, the rest are enough to demonstrate the point, but maybe I will get back to the skipped ones later). Note that I’m going to focus on problems involving infinities somehow; if there are problems not involving infinities I’ll likely miss them.
Infinitarian paralysis: Skipping for now due to philosophical problems in addition to mathematical ones.
Paradox of the gods: You haven’t stated your setup here formally, but if I try to formalize it (using real numbers as is probably appropriate here) I come to the conclusion that yes, the man cannot leave the starting point. Is this a “paradox”? No, it’s just what you get if you actually formalize this. The continuum is counterintuitive! It doesn’t quite fit our usual notions of causality! Think about differential equations for a moment—is it a “paradox” that some differential equations have nonunique solutions, even though it seems that a particle’s position, velocity, and relation between the two ought to “cause” its future trajectory? No! This is the same sort of thing; continuous time and continuous space do not work like discrete time and discrete space.
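A standard concrete instance of that differential-equation point, for any reader who wants one (my example, not from the post):

```latex
\dot{x} = 3x^{2/3}, \qquad x(0) = 0
```

Both x(t) ≡ 0 and x(t) = t³ solve this (check: the derivative of t³ is 3t², which equals 3(t³)^(2/3)), so the particle’s initial position and velocity, together with the governing equation, fail to determine its future trajectory.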
But in addition to your “resolution” being unnecessary, it’s also nonsensical. You’re taking the number of gods as a surreal number. That’s nonsense. Surreal numbers are not for counting how many of something there are. Are you trying to map cardinals to surreals? I mean, yeah, you could define such a map, it’s easy to do with AC, but is it meaningful? Not really. You do not count numbers of things with surreals, as you seem to be suggesting.
Of course, there’s more than one way to measure the size of an infinite set, not just cardinality. Since you translate the number into a surreal, perhaps you meant the set of gods to be reverse well-ordered, so that you can talk about its reverse order type, as an ordinal, and take that as a surreal? That would go a little way to making this less nonsensical, but, well, you never said any such thing.
Of course, your solution seems to involve implicitly changing the setting to have surreal-valued time and space, but that makes sense—it does make sense to try to make such “paradoxes” make more sense by extending the domain you’re talking about. You might want to make more of an explicit note of it, though. Anyway, let’s get back to nonsense.
So let’s say we accept this reverse-well-ordering hypothesis. Does your “resolution” follow? Does it even make sense? No to both! First, your “resolution” isn’t so much a deduction as a new assumption—that these reverse-well-ordered gods are placed at positions 1/2^α for ordinals α. I mean, I guess that’s a sensible extension of the setup, but… let’s note here that you actually are changing the setup significantly at this point; the original setup pretty clearly had ω gods, not more. But, OK, that’s fine—you’re generalizing, from the case of ω to the case of more. You should be more explicit that you’re doing that, but I guess that’s not wrong.
But your conclusion still is wrong. Why? Several reasons. Let’s focus on the case of ω-many gods, that the original setup describes. You say that the man is stopped at 1/2^ω. Question: Why? Is 1/2^ω the minimum of the set {1/2^n : n ∈ N } inside the surreals? Well, obviously not, because that set obviously has no smallest element.
But is it the infimum (or equivalently limit), then, inside the surreals, if not the minimum? Actually, let’s put that question aside for now and note that the answer to this question is actually irrelevant! Because if you accept the logic that the infimum (or equivalently limit) controls, then, guess what, you already have your resolution to the paradox back in the real numbers, where there is an infimum and it’s 0. So all the rest of this is irrelevant.
But let’s go on—is it the infimum (or equivalently limit)? No! It’s not! Because there is no infimum! A subset of the surreals with no minimum also has no infimum, always, unconditionally! The surreal numbers are not at all like the real numbers. You basically can’t do limits there, as we’ve already discussed. So there’s nothing particularly distinguishing about the point 1/2^ω, no particular reason why that’s where the man would stop. (There’s no god there! We’re talking about the case of ω gods, not ω+1 gods.)
We haven’t even asked the question of what you mean by 2^s for a surreal s. I’m going to assume, since you’re talking about surreals and didn’t specify otherwise, that you mean exp(s log 2), using the usual surreal exponential. But, since you’re only concerned with the case where s is an ordinal, maybe you actually meant taking 2^s using ordinal exponentiation, and then taking the reciprocal as a surreal. These are different, I hope you realize that!
What about if we use {left set|right set} instead of limits and infima? Well, there’s now even less reason to believe that such a point has any relevance to this problem, but let’s make a note of what we get. What is {|1, 1⁄2, 1⁄4, …}? Well, it’s 0, duh. OK, what if we exclude that by asking for {0|1, 1⁄2, 1⁄4, …} instead? That’s 1/ω. This isn’t 1/2^ω; it’s larger—well, unless you meant “use ordinal exponentiation and then invert”, in which case it is indeed equal and you need to be a hell of a lot clearer but it’s all still irrelevant to anything. (Using ordinal exponentiation, 2^ω = ω; while using the surreal exponential, 2^ω = ω^(ω log 2) > ω.)
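To spell out the two readings of 2^ω side by side (just restating the computations above, nothing new):

```latex
\begin{aligned}
\text{ordinal exponentiation:}\quad & 2^\omega = \sup_{n<\omega} 2^n = \omega, && \text{so } 1/2^\omega = 1/\omega;\\
\text{surreal exponential:}\quad & 2^\omega = \exp(\omega\log 2) = \omega^{\omega\log 2} > \omega, && \text{so } 1/2^\omega < 1/\omega.
\end{aligned}
```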
(What if we use sign-sequence limits, FWIW? That’ll still get us 1/ω. You really shouldn’t use those though.)
Anyway, in short, your resolution makes no sense. Moving on...
Two envelopes paradox: OK, I’m ignoring all the parts that don’t have to do with surreals, including the use of an improper prior (aka not a probability distribution); I’m just going to examine the use of surreals.
Please. Explain. How, on earth, does one put a uniform distribution on an interval of surreal numbers?
So, if we look at the interval from 0 to 1, say, then the probability of picking a number between a and b, for a<b, is b-a? For surreal a and b?
So, first off, that’s not a probability. Probabilities are real, for very good reason. This is explicitly a decision-theory context, so don’t tell me that doesn’t apply!
But OK. Let’s accept the premise that you’re using a surreal-valued probability measure instead of a real one. Except, wait, how is that going to work? How is countable additivity going to work, for instance? We’ve already established that infinite sums do not (in general) work in the surreals! (See earlier discussion.) But OK, we can ignore that—hell, Savage’s theorem doesn’t guarantee countable additivity, so let’s just accept finite additivity. There is the question of just how you’re going to define this in generality—it takes quite a bit of work to extend Jordan “measure” into Lebesgue measure, you know—but you’re basically just using intervals so I’ll accept we can just treat that part naïvely.
But now you’re taking expected values! Of a surreal-valued probability distribution over the surreals! So basically you’re having to integrate a surreal-valued function over the surreals. As I’ve mentioned before, there is no known theory of this, no known general way to define this. I suppose since you’re just dealing with step functions we can treat this naïvely, but ugh. Nothing you’re doing is really defined. This is pure “just go with it, OK?” This one is less bad than the previous one, this one contains things one can potentially just go with, but you don’t seem to realize that the things you’re doing aren’t actually defined, that this is naïve heuristic reasoning rather than actual properly-founded mathematics.
Sphere of suffering: Skipping for now due to philosophical problems in addition to mathematical ones.
Hilbert Hotel: So, first off, there’s no paradox here. This sort of basic cardinal arithmetic of countable sets is well-understood. Yes, it’s counterintuitive. That’s not a paradox.
But let’s examine your resolution, because, again, it makes no sense. First, you talk about there being n rooms, where n is a surreal number. Again: You cannot measure sizes of sets with surreal numbers! That is meaningless!
But let’s be generous and suppose you’re talking about well-ordered sets, and you’re measuring their size with ordinals, since those embed in the surreals. As you note, this is changing the problem, but let’s go with it anyway. Guess what—you’ve still described it wrong! If you have ω rooms, there is no last room. The last room isn’t room ω, that’d be if you had ω+1 rooms. Having ω rooms is the original Hilbert Hotel with no modification.
I’m assuming when you say n/2 you mean that in the surreal sense. OK. Let’s go back to the original problem and say n=ω. Then n/2 is ω/2, which is still bigger than any natural number, so there’s still nobody in the “last half” of rooms! What if n=ω+1, instead? Then ω/2+1/2 is still bigger than any natural number, so your “last half” consists only of ω+1 -- it’s not of the same cardinality as your “first half”. Is that what you intended?
But ultimately… even ignoring all these problems… I don’t understand how any of this is supposed to “resolve” any paradoxes. It resolves it by making it impossible to add more people? Um, OK. I don’t see why we should want that.
But it doesn’t even succeed at that! Because if you have [Dedekind-]infinitely many, then for adding finitely many, you have that initial ω, so you can just perform your alterations on that and leave the rest alone. You haven’t prevented the Hilbert Hotel “paradox” at all! And for doubling, well, assuming well-ordering (because you’re measuring sizes with ordinals, maybe?? or because we’re assuming choice) well, you can partition things into copies of ω and go from there.
Galileo’s paradox: Skipping this one as I have nothing more to add on this subject, really.
Bacon’s puzzle: This one, having nothing to do with surreals, is completely correct! It’s not new, but it’s correct, and it’s neat to know about, so that’s good. (Although I have to wonder: Why is it on this one you accept conventional mathematics of the infinite, instead of objecting that it’s a “paradox” and trying to shoehorn in surreals?)
Trumped and the St. Petersburg ones: Skipping for now due to philosophical problems in addition to mathematical ones.
Dice-room murders: An infinitesimal chance the die never comes up 10? No, there’s a 0 chance. That’s how probability theory works. Again, probability is real-valued for very good reasons, and reals don’t have infinitesimals. If you want to introduce probabilities valued over some other codomain, you’re going to have to specify what and explain how it’s going to work. “Infinitesimal” is not very specific.
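For what it’s worth, here’s the ordinary real-valued computation, as a quick sketch; the per-round probability p = 1/36 is a hypothetical stand-in for whatever the chance of the fatal roll actually is, since the exact value doesn’t matter:

```python
# The chance that an event with per-round probability p has NOT yet occurred
# after n rounds is (1 - p)**n, which tends to 0 -- not to an infinitesimal.
p = 1.0 / 36.0  # hypothetical per-round chance of the fatal roll

for n in (100, 1_000, 10_000):
    print(n, (1 - p) ** n)
```

Already by n = 10,000 the survival probability is below 10^-120; in the limit it is exactly 0.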
The rest as you say has nothing to do with infinities and seems correct so I’ll ignore it.
Ross-Littlewood paradox: Er… you haven’t resolved this one at all? The conventional answer, FWIW, is that you should take the limit of the sets, not the limit of the cardinalities, so that none are left, and this demonstrates the discontinuity of cardinality. But, um, you just haven’t answered this one? I mean I guess that’s not wrong as such...
Soccer teams: Your resolution bears little resemblance to the original problem. You initially postulated that the set of abilities was Z, then in your resolution you said it was an interval in the surreals. Z is not an interval in the surreals. In fact, no set is an interval in the surreals; between any two given surreals there is a whole proper class of surreals. Perhaps you meant in the omnific integers? Sorry, Z isn’t an interval in there either. Perhaps you meant in something of your own invention? Well, you didn’t describe it. Ultimately it’s irrelevant—because the fact is that, yes, if you add 1 to each element of Z, you get Z. No alternate way of describing it will change that.
Positive soccer teams: You, uh, once again didn’t supply a resolution? In any case this whole problem is ill-defined since you didn’t actually specify any way to measure which of two teams is better. Although, if we just assume there is some way, then presumably we want it to be a preorder (since teams can be tied), and then it seems pretty clear that the two teams should be tied (because each should be no greater than the other for the two reasons you gave). (Actually it’s not too hard to come up with an actual preorder here that does what you want, and then you can verify that, yup, the two teams are tied in it.) This happens a lot with infinities—things that are orders in the finite case become preorders. Just something you have to learn to live with, once again.
Can God pick an integer at random?: This is… not how probability works. There is no uniform probability distribution on the natural numbers, by countable additivity. Or, in short, no, God cannot pick an integer at random. You then go on to talk about nonsensical 1/∞ chances. In short, the only paradox here is due to a nonsensical setup.
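The standard argument, for reference (textbook material, nothing new): suppose P were a uniform probability distribution on the naturals, say P({n}) = c for every n. Countable additivity would force

```latex
1 = P(\mathbb{N}) = \sum_{n=1}^{\infty} P(\{n\}) = \sum_{n=1}^{\infty} c =
\begin{cases}
0 & \text{if } c = 0,\\
+\infty & \text{if } c > 0,
\end{cases}
```

a contradiction either way.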
But then you go and give it a nonsensical resolution, too. So, first off, once again, you can’t count things with surreals. I will once again generously assume that you intended there to be a well-ordered set of planets and are counting with ordinals rather than surreals.
It doesn’t matter. Not only do you then fail to reject the nonsensical setup, you do the most nonsensical thing yet: You explicitly mix surreal numbers with extended real numbers, and attempt to compare the two. What. Are you implicitly thinking of ∞ as ω here? Because you sure didn’t say anything like that! Seriously, these don’t go together.
I am tempted to do the formal manipulations to see if there is any way one might come to your conclusions by such meaningless formal manipulation, but I’ll just give you the benefit of the doubt there, because I don’t want to give myself a headache doing meaningless formal manipulations involving two different number systems that can’t be meaningfully combined.
Banach-Tarski paradox: This starts out as a decent explanation of Banach-Tarski; it’s missing some important details, but whatever. But then you start talking about sequences of infinite length. (Something that wasn’t there before—you act as if this was already there, but it wasn’t.) Which once again you meaninglessly assign a surreal length. I’ll once again assume you meant an ordinal length instead. Except that doesn’t help much because this whole thing is meaningless—you can’t take infinite products in groups.
Or maybe you can, in this case, since we’re really working in F_2 embedded in SO(3), rather than just in F_2? So you could take the limit in SO(3), if it exists. (SO(3) is compact, so there will certainly be at least one limit point, but I don’t see any obvious reason it’d be unique.)
Except the way you talk about it, you talk as if these infinite sequences are still in our free group. Which, no. That is not how free groups work. They contain finite words only.
Maybe you’re intending this to be in some sort of “free topological group”, which does contain infinite and transfinite words? Yeah, there’s no such thing in any nontrivial manner. Because if you have any element g, then you can observe that g(ggg...) = ggg..., and therefore, cancelling (because this is a group), g = 1, forcing the whole group to be trivial. Well, OK, that’s not a full argument, I’ll admit. But, that’s just a quick example of how this doesn’t work, I hope you don’t mind. Point is: You haven’t defined this new setting you’re working in, and if you try, you’ll find it makes no sense. But it sure as hell ain’t the free group F_2.
I also have no idea what you’re saying this does to the Banach-Tarski paradox. Honestly, it doesn’t matter, because the logic behind Banach-Tarski remains the same regardless.
The headache: Skipping for now
The magic dartboard: No, a bijection between the countable ordinals and [0,1] is not known to exist. That’s only true if you assume the continuum hypothesis. Are you assuming the continuum hypothesis? You didn’t mention any such thing.
You then give a completely wrong and nonsensical argument as to why this construction has the desired “magic dartboard” property, in which you talk about certain ordinals being in the “first 1/n” of the countable ordinals, or the “last half” of the countable ordinals. This is completely meaningless. There is no first 1/n, or last half, of the countable ordinals. If you had some meaning in mind, you’re going to have to explain it. And if you mean going into the surreals and comparing them against ω_1/n, then, unsurprisingly, the entire countable ordinals will always fall in your first 1/n. The construction does yield a magic dartboard, but you’re completely wrong as to why.
Thomson’s lamp: Your resolution here is nonsense. Now, our presses are occurring in a well-ordered sequence, so it’s most appropriate to regard the number of presses as an ordinal. In which case, the number of presses is ω. It’s not a question—that’s what it is. It doesn’t depend on how we define the reals, WTF? The reals are the reals (unless you’re going to start doing constructive mathematics, in which case the things you wrote will presumably be wrong in many more ways). It might depend on how you define the problem, but you were pretty explicit about what the press timings are. Anyway, ω is even as an omnific integer, but does that mean we should consider the lamp to be on? I see no reason to conclude this. The lamp’s state has no well-defined limit, after all. This is once again naïvely extending something from the finite to the infinite without checking whether it actually extends (it doesn’t).
Really, the basic mistake here is assuming there must be an answer. As I said, the lamp’s state has no limit, so there really just isn’t any well-defined answer to this problem.
Grandi’s series: You once again assign a variable surreal length (which still makes no sense) to something which has a very definite length, namely ω. In any case, Grandi’s series has no limit. You say it depends on whether the length is even or odd. Suppose we interpret that as “even or odd as an omnific integer” (i.e. having even or odd finite part). OK. So you’re saying that Grandi’s series sums to 0, then, since ω is even as an omnific integer? It doesn’t matter; the series has no limit, and if you tried to extend it transfinitely, you’d get stuck at ω when there’s already no limit there.
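To make “no limit” concrete, a quick sketch of the partial sums:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... : they oscillate
# between 1 and 0 forever, so the sequence of partial sums has no limit.
partial_sums = []
total = 0
for k in range(8):
    total += (-1) ** k
    partial_sums.append(total)

print(partial_sums)  # [1, 0, 1, 0, 1, 0, 1, 0]
```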
I mean, I suppose you could define a new notion of what it means to sum a divergent (possibly transfinite series), and apply it to Grandi’s series (possibly extended transfinitely) as an example, but you haven’t done that. You’ve just said what the limit “is”. It isn’t. More naïve extension and formal manipulation in place of actual mathematical reasoning.
Satan’s apple: Skipping, you didn’t mention surreals and the paradox is entirely philosophical rather than mathematical (you also admitted confusion on this one rather than giving a fake resolution, so good for you).
Gabriel’s horn: Yup, you described this one correctly at least!
Bertrand paradox: You almost had this, but still snuck in an incorrect statement revealing a serious conceptual error. There aren’t multiple sets of chords; there are multiple probability distributions on the set of chords. Really, it’s not that all the probabilities are valid, it’s just that it depends on how you pick, but I was giving you the benefit of the doubt on that one until you added that bit about multiple sets of chords.
Zeno’s paradoxes: We can argue all we like about the “real” resolution here philosophically but whatever, you seem to grasp the mathematics of it at least, so let’s move on
Skolem’s paradox: You’ve mostly summed this one up correctly. I must nitpick and point out that membership in the model is not necessarily the same as membership outside the model even for those sets that are in the model—something which you might realize but your explanation doesn’t make clear—but this is a small error compared to the giant conceptual errors that fill most of what you’ve written here.
Whew. OK. I will maybe get back to the ones I skipped, but probably not because this is enough to demonstrate my point. This post is horribly wrong nearly in its entirety, shot through with serious conceptual errors. You really need to relearn this stuff from scratch, because almost nothing you’re saying makes sense. I urge everyone else to ignore this post and not take anything it says as reliable.
I don’t think this follows. I do not see how degree of wrongness implies intent. Eliezer’s comment rhetorically suggests intent (“trolling”) as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.
I would say moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake, that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.
Is it contrary to everything Eliezer’s ever written? Sure! But reading the entirety of the Sequences, calling yourself a “rationalist”, does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.
I think we can only infer intent like you’re talking about if the person in question is, actually, y’know, thinking about what they’re doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a “rationalist” is supposed to do, it’s still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.
I do worry about “ends justify the means” reasoning when evaluating whether a person or project was or wasn’t “good for the world” or “worth supporting”. This seems especially likely when using an effective-altruism-flavored lens on which only a few people/organizations/interventions matter orders of magnitude more than the rest. Suppose one believes that a project is one of very few that could possibly matter and that the future of humanity is at stake, and also believes the project is doing something new and experimental that current civilization is inadequate for. There is then a risk of using that belief to extend unwarranted tolerance of structurally unsound organizational decisions, including those typical of “high-demand groups” (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on), without proportionate concern for the risks of structuring an organization in that way.
There is (roughly) a sequences post for that. :P
Later edits: various edits for clarity; also the “transfinite sequences suffice” thing is easy to verify, it doesn’t require some exotic theorem
Yet later edit: Added another example
Two weeks later edit: Added the part about sign-sequence limits
So, to a large extent this is a problem with non-Archimedean ordered fields in general; the surreals just exacerbate it. So let’s go through this in stages.
===Stage 1: Infinitesimals break limits===
Let’s start with an example. In the real numbers, the limit as n goes to infinity of 1/n is 0. (Here n is a natural number, to be clear.)
If we introduce infinitesimals—even just as minimally as, say, passing to R(ω) -- that’s not so, because if you have some infinitesimal ε, the sequence will not get within ε of 0.
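Unpacking that, since it’s the crux: “ε is infinitesimal” means exactly that

```latex
0 < \varepsilon < \frac{1}{n} \quad \text{for every positive integer } n,
```

so |1/n − 0| = 1/n > ε for every n, and the sequence 1/n never enters the interval (−ε, ε) around 0.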
Of course, that’s not necessarily a problem; I mean, that’s just restating that our ordered field is no longer Archimedean, right? Of course 1/n is no longer going to go to 0, but is 1/n really the right thing to be looking at? How about, say, 1/x, as x goes to infinity, where x takes values in this field of ours? That still goes to 0. So it may seem like things are fine, like we just need to get these sequences out of our head and make sure we’re always taking limits of functions, not sequences.
But that’s not always so easy to do. What if we look at x^n, where |x|<1? If x isn’t infinitesimal, that’s no longer going to go to 0. It may still go to 0 in some cases—like, in R(ω), certainly 1/ω^n will still go to 0 -- but 1/2^n sure won’t. And what do we replace that with? 1/2^x? How do we define that? In certain settings we may be able to—hell, there’s a theory of the surreal exponential, so in the surreals we can—but not in general. And doing that requires first inventing the surreal exponential, which—well, I’ll talk more about that later, but, hey, let’s talk about that a bit right now. How are we going to define the exponential? Normally we define exp(x) to be the limit of 1, 1+x, 1+x+x^2/2… but that’s not going to work anymore. If we try to take exp(1), expecting an answer of e, what we get is that the sequence doesn’t converge due to the cloud of infinitesimals surrounding it; it’ll never get within 1/ω of e. For some values maybe it’ll converge, but not enough to do what we want.
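To spell out that failure: write s_n for the n-th partial sum of the series for exp(1). Each e − s_n is a positive real number, and every positive real exceeds every infinitesimal, so in R(ω)

```latex
e - s_n > \frac{1}{\omega} \quad \text{for all } n;
```

the sequence stays outside the interval (e − 1/ω, e + 1/ω) forever, and so cannot converge to e.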
Now the exponential is nice, so maybe we can find another definition (and, as mentioned, in the case of the surreals indeed we can, while obviously in the case of the hyperreals we can do it componentwise). But other cases can be much worse. Introducing infinitesimals doesn’t break limits entirely—but it likely breaks the limits that you’re counting on, and that can be fatal on its own.
===Stage 2: Uncountable cofinality breaks limits harder===
Stage 2 is really just a slight elaboration of stage 1. Once your field is large enough to have uncountable cofinality—like, say, the hyperreals—no sequence (with domain the whole numbers) will converge (unless it’s eventually constant). If you want to take limits, you’ll need transfinite sequences of uncountable length, or you simply will not get convergence.
Again, when you can rephrase things from sequences (with domain the natural numbers) to functions (with domain your field), things are fine. Because obviously your field’s cofinality is equal to itself. But you can’t always do that, or at least not so easily. Again: It would be nice if, for |x|<1, we had x^n approaching 0, and once we hit uncountable cofinality, that is simply not going to happen for any nonzero x.
(A note: In general in topology, not even transfinite sequences are good enough for general limits, and you need nets/filters. But for ordered fields, transfinite sequences (of length equal to the field’s cofinality) are sufficient. Hence the focus on transfinite sequences rather than being ultra-general and using nets.)
Note that of course the hyperreals are used for nonstandard analysis, but nonstandard analysis doesn’t involve taking limits in the hyperreals—that’s the point; limits in the reals correspond to non-limit-based things in the hyperreals.
===Stage 3: The surreals break limits as hard as is possible===
So now we have the surreals, which take uncountable cofinality to the extreme. Our cofinality is no longer merely uncountable, it’s not even an actual ordinal! The “cofinality” of the surreals is the “ordinal” represented by the class of all ordinals (or the “cardinal” of the class of all sets, if you prefer to think of cofinalities as cardinals). We have proper-class cofinality.
Limits of sequences are gone. Limits of ordinary transfinite sequences are gone. All that remains working are limits of sequences whose domain consists of the entire class of all ordinals. Or, again, other things with proper-class cofinality; 1/x still goes to 0 as x goes to infinity (again, letting x range over all surreals—note that that’s a very strong notion of “goes to infinity”!) You still have limits of surreal functions of a surreal variable. But as I keep pointing out, that’s not always good enough.
I mean, really—in terms of ordered fields, the real numbers are the best possible setting for limits, because of the existence of suprema. Every set that’s bounded above has a least upper bound. By contrast, in the surreals, no set that’s bounded above has a least upper bound! That’s kind of their defining property; if you have a set S and an upper bound b then, oops, {S|b} sneaks right inbetween. Proper classes can have suprema, yes, but, as I keep pointing out, you don’t always have a proper class to work with; oftentimes you just have a plain old countably infinite set. As such, in contrast to the reals, the surreal numbers are the worst possible setting for limits.
The result is that doing things with surreals beyond addition and multiplication typically requires basically reinventing those things. Now, of course, the surreal numbers have something that vaguely resembles limits, namely, {left stuff|right stuff} -- the “simplest in an interval” construction. I mean, if you want, say, √2, you can just put {x∈Q, x>0, x^2<2 | x∈Q, x>0, x^2>2}, and, hey, you’ve got √2! Looks almost like a limit, doesn’t it? Or a Dedekind cut? Sure, there’s a huge cloud of infinitesimals surrounding √2 that will thwart attempts at limits, but the simplest-in-an-interval construction cuts right through that and snaps to the simplest thing there, which is of course √2 itself, not √2+1/ω or something.
Added later: Similarly, if you want, say, ω^ω, you just take {ω,ω^2,ω^3,...|}, and you get ω^ω. Once again, it gets you what a limit “ought” to get you—what it would get you in the ordinals—even though an actual limit wouldn’t work in this setting.
But the problem is, despite these suggestive examples showing that snapping-to-the-simplest looks like a limit in some cases, it’s obviously the wrong thing in others; it’s not some general drop-in substitute. For instance, in the real numbers you define exp(x) as the limit of the sequence 1, 1+x, 1+x+x^2/2, etc. In the surreals we already know that won’t work, but if you make the novice mistake of trying to fix it by instead defining exp(x) as {1,1+x,1+x+x^2/2,...|}, you will get not exp(1)=e but rather exp(1)=3. Oops. We didn’t want to snap to something quite that simple. And that’s hard to prevent.
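To make the snapping behavior concrete, here’s a small illustrative sketch (my own code, not from any surreal-number library; `simplest_between` and `simplest_above` are hypothetical helper names) of the standard simplicity rule—earliest birthday wins: first 0, then small integers, then halves, quarters, and so on—applied to finite rational truncations of the left and right sets:

```python
from fractions import Fraction
from math import floor

def simplest_between(a, b):
    """Simplest dyadic rational strictly between a and b (requires a < b).

    "Simplest" in the surreal-birthday sense: prefer 0, then integers of
    small magnitude, then halves, then quarters, ...
    """
    a, b = Fraction(a), Fraction(b)
    if b <= 0:
        return -simplest_between(-b, -a)   # reduce to the positive case
    if a < 0:
        return Fraction(0)                 # 0 is the simplest number of all
    # Now 0 <= a < b: if an integer lies strictly inside, the least one wins.
    n = floor(a) + 1
    if n < b:
        return Fraction(n)
    # Otherwise the interval sits inside (m, m+1); binary-search dyadic
    # midpoints, which enumerates dyadics in order of increasing birthday.
    m = floor(a)
    lo, hi = Fraction(0), Fraction(1)
    while True:
        mid = (lo + hi) / 2
        if a - m < mid < b - m:
            return m + mid
        if mid <= a - m:
            lo = mid
        else:
            hi = mid

def simplest_above(a):
    """Simplest number exceeding a, i.e. {a | } with empty right set."""
    a = Fraction(a)
    return Fraction(0) if a < 0 else Fraction(floor(a) + 1)

# {3/10 | 3/5} snaps to 1/2, not to the midpoint 9/20.
print(simplest_between(Fraction(3, 10), Fraction(3, 5)))  # 1/2

# Truncating {1, 1+x, 1+x+x^2/2, ... |} at x=1: the partial sums of e.
partial_sums = [Fraction(1), Fraction(2), Fraction(5, 2),
                Fraction(8, 3), Fraction(65, 24)]
print(simplest_above(max(partial_sums)))  # 3
```

With rational endpoints this only ever recovers finite-birthday dyadics; the genuine surreal construction takes arbitrary (possibly infinite) sets of options, which is how {L|R} can land exactly on √2. But the 3-instead-of-e phenomenon falls out immediately: every partial sum is below 3, and 3 is earlier-born than anything between the partial sums and it.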
You can do it—there is a theory of the surreal exponential—but it requires care. And it requires basically reinventing whatever theory it is that you’re trying to port over to the surreal numbers; it’s not a nice straight port like so many other things in mathematics. It’s been done for a number of things! But not, I think, for the things you need here.
Martin Kruskal tried to develop a theory of surreal integration back in the 70s; he ultimately failed, and I’m pretty sure nobody has succeeded since. And note that this was for surreal functions of a single surreal variable. For surreal utilities and real probabilities you’d need surreal functions on a measure space, which I imagine would be harder, basically for cofinality reasons. And for this thing, where I guess we’d have something like surreal probabilities… well, I guess the cofinality issue gets easier—or maybe gets easier, I don’t want to say that it does—but it raises so many others. Like, if you can do that, you should at least be able to do surreal functions of a single surreal variable, right? But at the moment, as I said, nobody knows how (I’m pretty sure).
In short, while you say that the surreals solve a lot more problems than people realize, my point of view is basically the opposite: From the point of view of applications, the surreal numbers are basically an attractive nuisance. People are drawn to them for obvious reasons—surreals are cool! Surreals are fun! They include, informally speaking, all the infinities and infinitesimals! But they can be a huge pain to work with, and—much more importantly—whatever it is you need them to do, they probably don’t do it. “Includes all the infinities and infinitesimals” is probably not actually on your list of requirements; while if you’re trying to do any sort of decision theory, some sort of theory of integration is.
You have basically no idea how many times I’ve had to write the same “no, you really don’t want to use surreal utilities” comment here on LW. In fact years ago—basically due to constant abuse of surreals (or cardinals, if people really didn’t know what they were talking about)—I wrote this article here on LW, and (while it’s not like people are likely to happen across that anyway) I wish I’d included more of a warning against using the surreals.
Basically, I would say, go where the math tells you to; build your system to the requirements, don’t just go pulling something off the shelf unless it meets those requirements. And note that what you build might not be a system of numbers at all. I think people are often too quick to jump to the use of numbers in the first place. Real numbers get a lot of this, because people are familiar with them. I suspect that’s the real historical reason why utility functions were initially defined as real-valued; we’re lucky that they turned out to actually be appropriate!
(Added later: There is one other thing you can do in the surreals that kind of resembles a limit, and this is to take a limit of sign sequences. This at least doesn’t have the cofinality problem; you can take a sign-sequence limit of a sequence. But this is not any sort of drop-in replacement for usual limits either, and my impression (not an expert here) is that it doesn’t really work very well at all in the first place. My impression is that, while {left|right} can be a bit too oblivious to the details of the inputs (if you’re not careful), limits of sign sequences are a bit too finicky. For instance, defining e to be the sign-sequence limit of the partial sums 1, 2, 5⁄2, 8⁄3, 65⁄24… will work, but defining exp(x) analogously won’t, because what if x is (as a real number) the logarithm of a dyadic rational? Instead of getting exp(log(2))=2, you’ll get exp(log(2))=2-1/ω. (I’m pretty sure that’s right.) There goes multiplicativity! Worse yet, exp(-log(2)) won’t “converge” at all. Again, I can’t rule out that, like {left|right}, it can be made to work with some care, but it’s definitely not a drop-in replacement, and my non-expert impression is that it’s overall worse than {left|right}. In any case, once again, the better choice is almost certainly not to use surreals.)
komponisto pointed out earlier that “Hall of Fame” has connotations we may not want. I’m surprised that got changed when nobody seemed to disagree with him. I suggest that be reverted. (Obviously this is minor, though.)
Also, I’d like to repeat my suggestion that the “New on OB” sidebar section have “Overcoming Bias” as a link so I can jump to the blog rather than individual posts.
I want to more or less second what River said. Mostly I wouldn’t have bothered replying to this… but your line of “today around <30” struck me as particularly wrong.
So, first of all, as River already noted, your claim about “in loco parentis” isn’t accurate. People 18 or over are legally adults; yes, there used to be a notion of “in loco parentis” applied to college students, but that hasn’t been current law since about the 60s.
But also, under 30? Like, you’re talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they’re legally adults and there’s no longer any such thing as “in loco parentis”. But in my experience grad students are, absolutely, treated as adults, nor have I heard of things being otherwise. Perhaps this varies by field (I’m in math) or location or something, I don’t know, but I at least have never heard of that before.
Given a bunch of people who disagree, some of whom are actual experts and some of whom are selling snake oil, and lacking the relevant expertise yourself, there are some further quick-and-dirty heuristics you can use to tell which of the two groups is which. I think my suggestion can best be summarized as “look at argument structure”.
The real experts will likely spend a bunch of time correcting popular misconceptions, which the fakers may themselves subscribe to. By contrast, the fakers will generally not bother “correcting” the truth to their fakery, because why would they? They’re trying to sell to unreflective people who just believe the obvious-seeming thing; someone who actually bothered to read corrections to misconceptions at any point is likely too savvy to be their target audience.
Sometimes though you do get actual arguments. Fortunately, it’s easier to evaluate arguments than to determine truth oneself—of course, this is only any good if at least one of the parties is right! If everyone is wrong, heuristics like this will likely be no help. But in an experts-and-fakers situation, where one of the groups is right and the other pretty definitely wrong, you can often just use heuristics like “which side has arguments (that make some degree of sense) that the other side has no answer to (that makes any sense)?”. If we grant the assumption that one of the two sides is right, then it’s likely to be that one.
When you actually have a lot of back-and-forth arguing—as you might get in politics, or, as you might get in disputes between actual experts—the usefulness of this sort of thing can drop quickly, but if you’re just trying to sort out fakers from those with actual knowledge, I think it can work pretty well. (Although honestly, in a dispute between experts, I think the “left a key argument unanswered” is still a pretty big red flag.)
Hoo boy, I have a number of things to say on this.
First, to get this out of the way, I fricking hate punch bug. That would definitely not have been my choice of example, that’s for sure. But, we can substitute in similar things that don’t involve, you know, literally punching people. Other, lesser, sorts of unannounced roughhousing, say. At the very least, I want to assert “The rule ‘never touch anyone without asking’ is not workable” as a sort of minimal example.
But none of that is the real point, and I think this is a post that gets at a lot of important things. I do worry that it’s conflating some things, but, oh well, we can peel these apart later.
My comments, from smallest / least related to largest / most core:
It is useful to be able to express weak preferences and have them treated as weak preferences. I notice I can’t do this with some people—any expressed preference is treated as a strong preference, so instead of just being able to say “weak preference in favor of X”, you have to judge for yourself whether to mention X at all. That said, it’s not clear to me that this is in general necessarily due to “social ownership of the micro”; perhaps that’s one thing that can cause it, but it does not at all seem to be the only context this comes up in. Also, seeing as “social ownership of the micro” seems to go together with “being explicit about things”, stating “weak preference” explicitly seems like it would avert much of the problem here. But yes, it is useful to recognize that not every stated preference is a strong preference, even if “weak preference” is not explicitly stated.
I want to expand on one point in there—there’s this idea out there that, when judging what’s socially OK and what’s not, the only two options are “judge by your actual effects on people” or “judge by your intentions”. These are not the only options! “Judge against an agreed-upon standard of reasonableness” is also absolutely an option and has a lot of advantages. (I mean ultimately you have to go by consequences, but always trying to avoid any negative effects on people results in, well, the pathologies that Duncan describes.)
...you know what, I’ll put the rest of what I have to say in a separate reply comment, because this could get long and also touchy, and it probably deserves its own comment...
I can’t make sense of this comment.
If one is talking about one’s preferences over number of apples, then the statement that it is a total preorder is a weaker statement than the statement that more is better. (Also, you know, real number assumptions all over the place.) If one is talking about preferences not just over number of apples but in general, then even so it seems to me that the complete class theorem is making some very strong assumptions, much stronger than the assumption of a total preorder! (Again, look at all those real number assumptions.)
Moreover it’s not even clear to me that the complete class theorem does what you claim it does, like, at all. Like it starts out assuming the notion of probability. How can it ground probability when it starts out assuming it? And perhaps I’m misunderstanding, but are the “risk functions” it discusses not in utility? It sure looks like expected values of them are being taken with the intent that smaller is better (this seems to be implicit in the definition of r(θ), that r(θ) is measured by expected value when T isn’t a pure strategy). Is that mistaken?
(Possible source of error here: I can’t seem to find a statement of the complete class theorem that fits neatly into Savage/VNM/Cox/etc-style formalism and I’m having some trouble translating it to such, so I may be misunderstanding. The most sense I’m making of it at the moment is that it’s something like your examples for why probabilities must sum to one—i.e., it’s saying, if you already believe in utility, and something almost like probability, it must actually be probability. Is that accurate, or am I off?)
(Edit: Also if you’re taking issue with the preorder assumption, does this mean that you no longer consider VNM to be a good grounding of the notion of utility for those who already accept the idea of probability?)
So, um, OK, time to step in it. Hoping this doesn’t count as a “hot-button political issue”. This is going to echo some things I’ve said over in the comments at Thing of Things, but I’m not going to go digging for those links right now...
It seems to me that some of what Duncan describes here is closely related to that infamous problem which led to such writings as Scott’s old “Meditations” series, Scott Aaronson’s “comment 171″, etc. It’s not so much a matter of “social ownership of the micro”, but rather micro-vs-macro interpretations of certain instructions. Let’s consider the situation in Vignette #1. In it, Alexis and Blake both correctly observe that how you ask a person about something may, at a micro level, affect how comfortable they feel with the possible responses to it.
(This may be a different sense of “micro”. After all—if this micro-emotional effect affects their actual decision, it then necessarily has macro effects! Which is to say, our brains’ all-or-nothing decision systems can take the micro and amplify it. The “micro” here isn’t so much in the results as in the fact that the things everything hinges on are small: the details of how a request is phrased, or the tone it’s presented in.)
So, if a person X reads writings—written by people thinking in macro terms, or who don’t recognize that these micro-influences exist—about how it is wrong to ever pressure anyone, but interprets them in a micro manner… yeah. You see how it goes. That’s not a matter of “social ownership of the micro”, but it is a matter of recognizing the micro and considering it as morally relevant and attempting to account for it.
Let me step away from this narrow and hopefully not too hot-button point to make a broader one: We need what I’ll call “a theory of legitimate influence”; I think the lack of one is a big part of where Alexis/Blake situations (or the more controversial one I just mentioned) come from. (What do I mean by “we”? Um. I dunno. People. I dunno.)
What do I mean by a “theory of legitimate influence”? Well, let me start by presenting three examples:
“Classical liberal” theory of legitimate influence—People are more or less rational agents who can be trusted to act in their own interests. So force, coercion, and lying to people to get them to do what you want are wrong, but basically anything else is OK; so long as you’re honest and don’t use force it’s not like you’ll thereby get people to do things they don’t want to do.
“Nerd” theory of legitimate influence—People are kind of like rational agents, but can be biased by all sorts of arational considerations. So, influencing people via reasoned argument is OK, as you’re just helping them by providing them with good reasoning, but attempting to influence them by other, necessarily arational means is wrong.
“Sociopathic” theory of legitimate influence—Other people don’t really have preferences, or to the extent they do they don’t matter; do anything you want.
The thing here is that I don’t think any of these three theories is tenable. #3 is blatantly immoral. #1 is better but fails to rule out illegitimate things such as, say, badgering a person—no, it’s not using violence or threats or lying, but it’s still wrong. Meanwhile #2 makes the opposite mistake—it’s impossible not to influence people arationally; every interaction you have with other people is full of such things, so taking this seriously leads to total paralysis.
(And to be clear I am not in any way restricting this to the sorts of situations I initially implicitly mentioned at the top here—this is a generic phenomenon, one that comes up in any request or offer or attempt to organize people to do something. That said, it obviously has an especial relevance to such situations, because clearly being charming—which is a thing that one does that influences other people’s responses!—should be considered legitimate influence, it would seem.)
So, by a “theory of legitimate influence”, I mean a theory of which ways of influencing other people are—without having to do constant consequentialist reasoning—morally permissible (reasoned argument: definitely OK) and which are not (death threats: definitely not OK).
And I think in order to develop one that works, there are several insights one needs to take into account, that commonly expressed ones don’t. A few I can think of:
Once you go micro enough, once you get to the level where people can be influenced, you hit the problem that at such a level there often isn’t really such a thing as preexisting un-influenced preferences. If your theory assumes people always have definite preferences, or says that doing anything that shapes other people’s preferences is wrong, it’s not going to work.
On the other hand, to state the obvious, sometimes people really do act like agents and have actual preferences—on the larger scale this approximation definitely holds—and if you ignore this then you also will have a problem. (I mean, duh, but worth restating in light of #1.)
An important factor that doesn’t seem to go much remarked upon, but which to my mind is crucial, is, well, how the person being influenced feels about it! That is to say: There are forms of influence that feel like another person imposing their will upon you, and there are those that make you more likely to do something but just… don’t. Ego-dystonic vs ego-syntonic influence, one might call it. To my mind this is really morally relevant, not just in the heuristic sense I’m mostly talking about but also in a more direct consequentialist sense. But it seems like commonly-expressed theories of influence don’t really take any notice of this, even as the people expressing those theories don’t in fact condemn ego-syntonic influence as their explicitly stated theories make it seem like they would; they just seem to fail to recognize such things as influence at all, as if of course the way the request was phrased had no effect on their response because they had a fixed preexisting preference, duh. (Important note: Getting someone else to internalize some burdensome moral obligation that they feel terrible about and can’t actually enact and have to doublethink around is also not OK, just in case that was not clear! This doesn’t really fit cleanly into either of the two categories above—the obligation isn’t exactly external, no, but it still feels bad—it’s its own third thing; but morally it’s clearly on the bad side of the line.)
Maximizing autonomy—maximizing the ability of the person you’re talking to both to say “yes”, and to say “no”, and to say “I reject the question”—may not always be possible. Ideally you could accomplish all of these simultaneously, but in reality they often trade off against one another. At a macro level, if you don’t sweat the micro, you can of course accomplish all three, by just, you know, not putting people under horrible pressure or anything, and also explicitly noting the “I reject the question” option, which people often have a hard time coming up with for themselves. But if you care about the micro level, well, at that level there will likely be a tradeoff, and if you don’t account for that or just refuse to make any tradeoffs, you may encounter a problem.
I don’t know—it’s possible I just have less definite preferences / am more influenceable than other people and so think of this as more important; e.g. in previous discussions over at Thing of Things, Nita responded to my comments on the matter with, no, I really do have definite fixed preferences, none of this applies to me. So, IDK, typical mind fallacy very possible here. But, I think this is important, I think this is worth pointing out.
Oh, huh—looks like this paper is the summary of the blog series that “Slime Mold Time Mold” has been writing? Guess I can read the paper to skip to the end, since not all of the series is posted yet. :P
I think what you call “simplex” is essentially just what the existing English word “simplistic” refers to.
Took the survey.