I don’t know for sure whether we’re really disagreeing. Perhaps that’s a question with no definite answer, since it’s about where best to draw the boundary of an only-vaguely-defined term. But it seems like you’re saying “goal-thinking must only be concerned with goals that don’t involve people’s happiness”, and I’m saying that’s a mistake: the fundamental distinction is between (a) doing something as part of a happiness-maximizing process and (b) recognizing the layer of indirection in that and aiming at goals we can see other reasons for, goals which may or may not happen to involve our own or someone else’s happiness.
Obviously you can choose to focus only on goals that don’t involve happiness in any way at all, and maybe doing so makes some of the issues clearer. But I don’t think “involving happiness” / “not involving happiness” is the most fundamental criterion here; the distinction is actually, as your original terminology makes clear, between different modes of thinking.
I see things slightly differently.
Happiness, suffering, etc., function as internal estimators of goal-met-ness, like a variable in a computer program that indicates how you’re doing. Hence, trying to optimize happiness directly runs the risk of finding ways to change the value of the variable without changing the corresponding real-world things the variable is trying to track. So far, so good.
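For the programmers in the audience, here is a minimal sketch of that failure mode (toy code of my own, not anyone’s actual proposal): writing to the indicator variable directly “succeeds” without touching any of the real-world things the indicator was supposed to track.

```python
class Agent:
    def __init__(self):
        self.goals_met = 0.0   # the real-world state of affairs
        self.happiness = 0.0   # internal estimator that tracks goals_met

    def pursue_goals(self):
        # The honest path: change the world, and let the estimator follow.
        self.goals_met += 0.1
        self.happiness = self.goals_met

    def wirehead(self):
        # The failure mode: set the indicator directly, leaving the
        # real-world state it was supposed to track untouched.
        self.happiness = 1.0

agent = Agent()
agent.wirehead()
print(agent.happiness, agent.goals_met)  # 1.0 0.0 -- estimator decoupled
```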
But! That doesn’t mean that happiness can’t also be a thing we care about. If I can arrange for someone’s goals to be 50% met and for them to feel either as if they’re 40% met or as if they’re 60% met, I probably choose the latter; people like feeling as if their goals are met, and I insist that it’s perfectly reasonable for me to care about that as well as about their actual goals. For that matter, if someone has goals I find terrible, I may actually prefer their goals to go unmet but for them still to be happy.
I apply the same to myself—within reason, I would prefer my happiness to overestimate rather than underestimate how well my goals are being met—but obviously treating happiness as a goal is more dangerous there because the risk of getting seriously decoupled from my goals is greater. (I think.)
I don’t think it’s necessary to see nonexistence as neutral in order to prefer (in some cases, perhaps only very extreme ones) nonexistence to existence-with-great-suffering. Suffering is unpleasant. People hate it and strive to avoid it. Yes, the underlying reason for that is that avoiding suffering helps them achieve other goals, but I am not obliged to care only about the underlying reason. (Just as I’m not obliged to regard sex as existing only for the sake of procreation.)
Nitpick: near the end you have written P∨¬P where you mean P∧¬P.
Less-superficial observation about your argument at that point: What you quote is not, strictly, a contradiction unless you take “you cannot decide for the lives of others” more literally than anyone could reasonably think the person who said that meant it. Suppose they had said instead something like this: “In general you cannot decide for the lives of others, but when people commit major crimes they forfeit the right to decide the course of their lives thereafter, and sometimes there’s no way to avoid someone’s life being governed by someone else, and the best you can do is to try to minimize the harm. If someone commits murder, then pretty much everyone agrees that that gives us the right to use considerable force and coercion to stop them doing it again; I think we should do it by killing them rather than by locking them up for decades. If someone wants to have an abortion, then either they decide for the life of their unborn child or we decide for their life by stopping them; I think the latter is the lesser evil.” I think it’s clear that (1) there is no contradiction in that, and that (2) at least some people who say what you quoted a politician as saying actually mean something like that, and could be induced to explain themselves more thoroughly by suitable questioning. (I am not endorsing that position to any greater degree than saying it isn’t outright contradictory; in particular, it is not my own position.)
I’m not convinced. To me, at least, my goals that are about me don’t feel particularly different in kind from my goals that are about other people, nor do my goals that are about experiences feel particularly different from my goals that are about things other than experiences.
(It’s certainly possible to draw your dividing line between, say, “what you want for yourself” and “what other things you want”, but I think that’s an entirely different line from the one drawn in the OP.)
Is that really true? If you can have “have other people not suffer horribly” as a goal, you can have “not suffer horribly yourself” as a goal too. And if, on balance, your life seems likely to involve a lot of horrible suffering, then suicide might absolutely make sense even though it would reduce your ability to achieve your other goals.
I bite the bullet: I aim to use only goal-based thinking. (I dare say I don’t completely succeed.) I may have goals like “enjoy eating a tasty meal” or “stop feeling hungry” but those are still goals rather than what you’re calling desires.
I don’t think the two examples in your final paragraph are isomorphic, and I think they can be seen to be non-isomorphic in purely goal-based terms.
All else being equal, I prefer people to live rather than die, and I prefer that my preferences be satisfied. Taking a murder-pill would mean that more people die (at my hand, even) or that my preferences go unsatisfied, or both. So (all else being equal) I don’t want to take the murder-pill.
All else being equal, I prefer to eat things that I like and not things that I don’t like. I (hypothetically) don’t like spinach right now, so I don’t eat spinach. But if I suddenly started liking spinach, I could then eat spinach and thereby eat things I like rather than things I don’t. So I would expect to have more of my preferences satisfied if I started liking spinach. So (all else being equal) I do want to start liking spinach.
All of this is a matter of goals rather than (in your sense) desires. I want people to live, I want to have my preferences satisfied, I want to eat things I like, I want not to eat things I dislike.
“But”, I hear you cry, “you could equally well say in the first place ‘I prefer to live according to my moral principles, and at present those principles include not murdering people, but if I took the pill those preferences would change.’ And you could equally well say in the second place ‘I prefer not to eat spinach, and if I started liking spinach then I’d start doing that thing I prefer not to.’ And then you’d get the opposite conclusions.” But no, I could not equally well say those things: saying those things would give a wrong account of my preferences. Some of my preferences (e.g., more people living and fewer dying) are about the external world. Some (e.g., having enjoyable eating-experiences) are about my internal state. Some are a mixture of both. You can’t just swap one for the other.
(There’s a further complication, which is that—so it seems to me, and I know I’m not alone—moral values are not the same thing as preferences, even though they have a lot in common. I not only prefer people to live rather than die, I find it morally better that people live rather than die, and those are different mental phenomena.)
I think you are assuming that “utility” means something like “happiness”. That is not the only possible way to use the word.
If there is a term in my utility function (to whatever extent I have a utility function) for accurate knowledge, then there can be situations indistinguishable to me to which I assign different utility, because I may be unable to tell whether some bit of my “knowledge” is actually accurate or not.
I think maybe you think there is something impossible or incoherent about this, perhaps on the grounds that it’s absurd to say you care about the difference between X and Y when you cannot actually discern the difference between X and Y. I disagree. If you tell me that you are either going to shoot me in the head or shoot me in the head and then murder a million other people, I prefer the former even though, being dead, I will be unable to tell whether you’ve murdered the million others or not. If you tell me that you will either slap me in the face and then shoot me dead, or else shoot me dead and then murder a million others, and if I believe you, then I will gladly take that slap in the face. If you tell me that you will either slap me in the face, convince me that you aren’t going to murder anyone else, kill me, and then murder a million others, or else just kill me and the million others, I will not take the slap in the face even if I am confident that you could convince me. (Er, unless I think that the time you take convincing me makes it more likely that somehow you never actually get to murder me.)
My utility function (to whatever extent I have a utility function) maps world-states to utilities, not my-experience-states to utilities. There is of course another function that maps my-experience states to utilities, or maybe to something like probability distributions over utilities (it goes: experience-state → my beliefs about the state of the world → my estimate of my utility function), but it isn’t the same function and it isn’t what I care about even if in some sense it’s necessarily what I act on: if you propose to change the world-state and the experience-state in ways that don’t match, then my preferences track what you propose to do to the world-state, not the experience-state.
(Of course my experiences are among the things I care about, and I care about some of them a lot. If you threaten to make me wrongly think you have murdered my family then that’s a very negative outcome for me and I will try hard to prevent it. But if I have to choose between that and having my family actually murdered, I pick the former.)
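To put the world-state/experience-state distinction in toy code (everything here is my own illustrative invention, numbers included): the function I care about takes world-states, and the function I necessarily act on is only an estimate of it, routed through my beliefs.

```python
def utility(world: dict) -> float:
    # Maps world-states, not experience-states, to utilities; it has terms
    # for things I could never observe (e.g., murders after my death).
    return -1_000_000 * world["millions_murdered"] - world["slapped"]

def estimated_utility(experience, beliefs) -> float:
    # experience-state -> beliefs about the world -> estimate of my utility
    return sum(p * utility(w) for w, p in beliefs(experience))

# Two futures indistinguishable to me (I'm dead either way) nonetheless
# get different utilities, and my preferences track the difference:
slap_then_death = {"millions_murdered": 0, "slapped": 1}
death_then_murders = {"millions_murdered": 1, "slapped": 0}
print(utility(slap_then_death) > utility(death_then_murders))  # True

beliefs = lambda exp: [(slap_then_death, 0.5), (death_then_murders, 0.5)]
print(estimated_utility("about to be shot", beliefs))  # -500000.5
```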
I don’t know whether Halloween really does help people come to terms with death, or anything like that, by coupling it with absurdity, but I don’t think “we don’t do standup comedy at funerals, so we don’t really believe in meeting death with absurdity” is an argument that holds much water. Standup comedy at a funeral runs the risk of offending, perhaps very severely, the friends and family of the deceased. (And in some parts of our culture there is something not altogether unlike that: consider the old joke that the difference between an Australian wedding and an Australian funeral is that there’s one drunk fewer at the funeral. Funerals and wakes can be pretty rowdy.)
Singer’s proposal in that article isn’t _quite_ that, though it may be that he just didn’t think it through carefully enough (or deliberately simplified in an article intended for general consumption). He proposes that the fraction you give of your _total_ income should be, if you’re in the top [10%, 1%, 0.1%, 0.01%], [10%, 15%, 25%, 33%], producing discontinuities at the boundaries of those groups. I suspect that if pressed on that point he’d be happy to go with something smoother.
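To see the discontinuity concretely, here is a toy version of that schedule (the dollar thresholds at the percentile boundaries are invented round numbers, not Singer’s): someone just over a boundary owes a larger fraction of their whole income, so crossing it can leave them worse off after giving.

```python
# Singer's schedule: a fraction of *total* income, by income-percentile
# group. The dollar thresholds below are hypothetical, for illustration.
def singer_fraction(income: float) -> float:
    if income >= 10_000_000:  # top 0.01%
        return 0.33
    if income >= 2_000_000:   # top 0.1%
        return 0.25
    if income >= 500_000:     # top 1%
        return 0.15
    if income >= 150_000:     # top 10%
        return 0.10
    return 0.0

for income in (499_999, 500_000):
    give = singer_fraction(income) * income
    print(income, round(give), round(income - give))
# 499999 gives 50000 and keeps 449999
# 500000 gives 75000 and keeps 425000: that last dollar costs ~$25,000
```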
In my opinion this makes your post valueless.
(That’s not to say you’re obliged to explain what tools. But I think either saying nothing at all or actually being informative would be better than posting this as it is.)
There is no objective fact of the matter regarding moral standards. Rather, we want a moral system that can be widely adopted and that when widely adopted promotes things we find good.
A moral system that said “you have to spend every waking moment curing malaria and feeding the hungry” would probably either just make people feel burned out and miserable or else be rejected outright. Many imaginable and prima facie plausible moral systems turn out to say that. A moral system that said “just do whatever the hell you want” would probably lead to few people bothering to cure malaria and feed the hungry.
It seems plausible to me that a system that says “you should be making things better for others but it’s fine to devote most of your time and energy and resources to your own welfare and that of your family” does, given human nature, actually roughly maximize net good done. I expect the optimum is more demanding than the average person’s actual moral system, but probably not (much?) more demanding than the average effective altruist’s.
Immediately after the bit about monkeys there’s this:

> The usual goal in the typing monkeys thought experiment is the production of the complete works of Shakespeare. Having a spell checker and a grammar checker in the loop would drastically increase the odds. The analog of a type checker would go even further by making sure that, once Romeo is declared a human being, he doesn’t sprout leaves or trap photons in his powerful gravitational field.
which feels like a bit of an own goal to me, because I suspect the analogue of a type checker would actually make sure that once Romeo is declared a Montague it’s a type error for him to have any friendly interactions with a Capulet, thus preventing the entire plot of the play.
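In hypothetical typed-Python terms (the classes and function are mine; a static checker such as mypy would do the rejecting):

```python
class Montague: ...
class Capulet: ...

def befriend(a: Montague, b: Montague) -> None:
    """Friendly interaction, well-typed only within one house."""

romeo = Montague()
juliet = Capulet()
befriend(romeo, juliet)  # a checker rejects this: Capulet is not a
                         # Montague, so the play's whole plot is forbidden
```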
Let’s take a somewhat-concrete example. Your post mentions birds. OK, so let’s consider e.g. a model of birds flying in a flock, how they position themselves relative to one another, and so on. You suggest that we consider the birds as objects: so far, so good. And then you say “they do stuff like fly, tweet, lay eggs, eat, etc. I.e., verbs (morphisms).” For the purpose of a flocking model, the most relevant one of those is flying. How are you going to consider flying as a morphism in a category of birds? If A and B are birds, what is this morphism from A to B that represents flying? I’m not seeing how that could work.
In the context of a flocking model, there are some things involving two birds. E.g., one bird might be following another, tending to fly toward it. Or it might be staying away from another, not getting too close. Obviously you can compose these relations if you want. (You can compose any relations whose types are compatible.) But it’s not obvious to me that e.g. “following a bird that stays away from another bird” is actually a useful notion in modelling flocks of birds. It might turn out to be, but I would expect a number of other notions to be more useful: you might be interested in some sort of centre of mass of a whole flock, or the density of birds in the flock; you might want to consider something like a velocity field of which the individual birds’ velocities are samples; etc. None of these things feel very categorical to me (though of course e.g. velocities live in a vector space and there is a category of vector spaces).
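For what it’s worth, composing such relations is perfectly mechanical; a toy sketch, with invented bird names, of the composite I mentioned:

```python
# Relations as sets of ordered pairs; (a, c) is in the composite of
# r2 after r1 exactly when some b has (a, b) in r1 and (b, c) in r2.
def compose(r2, r1):
    return {(a, c) for (a, b1) in r1 for (b2, c) in r2 if b1 == b2}

follows = {("a", "b"), ("b", "c")}     # a follows b; b follows c
stays_away = {("b", "d"), ("c", "e")}  # b avoids d; c avoids e

print(compose(stays_away, follows))
# {('a', 'd'), ('b', 'e')}: "following a bird that stays away from another"
```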
Maybe flocking was a bad choice of example. Let’s try another: let the birds be hens on a farm, kept for breeding and/or egg-laying. We might want to understand how much space to give them, what to feed them, when to collect their eggs, whether and when to kill them, and so on. Maybe we’re interested in optimizing taste or profit or chicken-happiness or some combination of those. So, according to your original comment, the birds are again objects in a category, and now when they “lay eggs, etc., etc.” these are morphisms. What morphisms? When a bird lays an egg, what are the two objects the morphism goes between? When are we going to compose these morphisms and what good will it do us?
How does it actually help anything to consider birds as objects of a category?
Here’s the best I can do. We take the birds, and their eggs, and whatever else, as objects in a category, and we somehow cook up some morphisms relating them. The category will be bizarre and jury-rigged because none of the things we care about are really very categorical, but its structure will somehow correspond to some of the things about the birds that we care about. And then we make whatever sort of mathematical or computational model of the birds we would have made without category theory. So now instead of birds and eggs we have tuples (position, velocity, number of eggs sat on) or objects of C++ classes or something. Now since we’ve designed our mathematical model to match up, kinda, to what the birds actually do, maybe we can find a morphism between these two jury-rigged categories (i.e., a functor) corresponding to “making a mathematical model of”. And then maybe there’s some category-theoretic thing we can do with this model and other mathematical models of birds, or something. But I gravely doubt that any of this will actually deliver any insight that we didn’t ourselves put into it. I’d be intrigued to be proved wrong.
I’m really not convinced by this framing in terms of “objects doing things to other objects”.
Let’s take a typical example of a morphism: let’s say f:Z>0→R (note for non-mathematicians: that is, f is a function that takes a positive integer and gives you a real number) given by f(n)=√n. How is it helpful to think about this as Z>0 doing something to R? How is it even slightly like “Alice pushes Bob”? You say “Every model is ultimately found in how one object changes another object”—are you saying here that the integers change the real numbers? Or vice versa? (After that’s done, what have the integers or the real numbers become?)
The only thing here that looks to me like something changing something else is that f (the morphism, not either of the objects) kinda-sorta “changes” an individual positive integer to which it’s applied (an element of one of the objects, again not either of the objects) by replacing it with its square root.
But even that much isn’t true for many morphisms, because they aren’t all functions and the objects of a category don’t always have elements to “change”. For instance, there’s a category whose objects are the positive integers and which has a single morphism from x to y if and only if x≤y; when we observe that 5≤9, is 5 changing 9? or 9 changing 5? No, nothing is changing anything else here.
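To make that concrete, here’s a toy encoding of that poset-category (entirely my own sketch): a morphism is a bare witness that x≤y, there is nothing inside it to change, and composition is just transitivity.

```python
from typing import NamedTuple, Optional

class Leq(NamedTuple):
    # The unique morphism src -> dst, existing only when src <= dst.
    src: int
    dst: int

def hom(x: int, y: int) -> Optional[Leq]:
    return Leq(x, y) if x <= y else None  # at most one morphism x -> y

def compose(g: Leq, f: Leq) -> Leq:
    assert f.dst == g.src
    return Leq(f.src, g.dst)  # composition is transitivity of <=

f = hom(5, 9)           # the morphism witnessing 5 <= 9
g = hom(9, 12)
print(compose(g, f))    # Leq(src=5, dst=12); neither 5 nor 9 was "changed"
```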
So far as I can see, the only actual analogy here is with the bare syntactic structure: you can take “A pushes B” and “A has a morphism f to B” and match the pieces up. But the match isn’t very good—the second of those is a really unnatural way of writing it, and really you’d say “f is a morphism from A to B”, and the things you can do with morphisms and the things you can do with sentences don’t have much to do with one another. (You can say “A pushes B with a stick”, and “A will push B”, and so forth, and there are no obvious category-theoretic analogues of these; there’s nothing grammatical that really corresponds to composition of morphisms; if A pushes B and B eats C, there really isn’t any way other than that to describe the relationship between A and C, and indeed most of us wouldn’t consider there to be any relationship worth mentioning between A and C in this situation.)
This also helps to train your intuition in those cases where careful calculation reveals that the intuitive answer was in fact wrong.
It seems a bit odd to offer lambda calculus as an example of how category theory is useful in computing, when lambda calculus predates category theory by about a decade (1932 to 1942).
It’s more usual for topology to motivate category theory than the other way around. (That’s where category theory originally came from, historically.)
It seems extremely unfortunate that the terminology apparently shifted from “counterfactually valid” (which means the right thing) to “counterfactual” (which means almost the opposite of the right thing).
I would be interested to know how you see spite as “not necessarily negative”.
I don’t see the big shiny red button on the front page. If I visit LW in private mode, it’s there. I have the map turned off. I haven’t tried logging out or turning the map back on. I’m guessing that when Ben says it’s “over the frontpage map” that means it’s implemented in a way that makes it disappear if the map isn’t there. That seems a bit odd, though it probably isn’t worth the effort of fixing.
(I have a launch code but hereby declare my intention not to use it. I am intrigued by the discussions of trading launch codes, or promises to use or not use them, for valuable things like effective charitable donations, but am not interested in taking either side of any such trade.)