So your thesis is not that rationality dooms civilization, but only that, as far as we know, it might. I get it now.
The Mere Addition Paradox suffices to refute the AVG view. From Nick’s link:
Scenario A contains a population in which everybody leads lives well worth living. In A+ there is one group of people as large as the group in A and with the same high quality of life. But A+ also contains a like number of people with a somewhat lower quality of life. In Parfit’s terminology A+ is generated from A by “mere addition”. Comparing A and A+ it is reasonable to hold that A+ is better than A or, at least, not worse.
For example, A+ could evolve from A by the choice of some parents to have children whose quality of life is good, though not as good as the average in A. We can even suppose that this makes the parents a little happier, while still lowering the overall average.
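To make the averages and totals concrete, here is a toy calculation (the specific numbers are my own illustration, not Parfit’s or the link’s):

```python
# Hypothetical numbers, purely to illustrate mere addition.
# World A: ten people, each with quality of life 100.
# World A+: the same ten people plus ten more at quality 60 --
# lives still well worth living, but below A's average.
a      = [100] * 10
a_plus = a + [60] * 10

average = lambda xs: sum(xs) / len(xs)

print(average(a), sum(a))            # 100.0  1000
print(average(a_plus), sum(a_plus))  #  80.0  1600  (average drops, total rises)
```

On the AVG view the move from A to A+ is a loss, even though no one in A is worse off and every added person has a life worth living.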
Operant conditioning is an excellent answer as to why you do care more about your future self than about a random future person. But the original post asks why you should care more.
Of course, it’s open to you to argue that there’s less room in between “should care” and “do care” than most people think. Perhaps when it comes to both whom and when we care about, there isn’t much room at all.
Even going by what people do care about, however, I doubt that anterograde amnesia generally leads to disregard of one’s next-day fate. Should it?
While those are good reasons, I suggest that the biggest reason is simply that present-me cares about “me”. And “me” is a temporally extended person who includes future-me. There doesn’t seem to be anything particularly irrational about self-concern, any more than there is anything irrational about being a Red Wings fan if you happen to live in Detroit. (Or particularly rational, for that matter.)
There can be a separable sense of “should” that indicates rationality. Thus, “we should sign the treaty” can be an interesting truth for both parties when the “should” is that of rationality, and true for both parties but only interesting from the human side when the “should” is a moral should.
This commits one to what philosophers call moral externalism, namely, the view that what is morally required is not necessarily rationally required. That is not a reason to reject the view, but I expect it will draw criticism.
I like the metaphor of the peacenik wanting to rid the world of violence by suggesting that police not use weapons. Let’s elaborate on the analogy between Dark Arts and violence.
Tit For Tat is a common policy for trying to control violence. One obvious and much lamented flaw in the strategy is that it tends to foster cycles of violence, with each side claiming that the other side started it and that “they” use more vicious tactics.
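Here is a minimal sketch of that mechanism (my own illustration, nothing from the discussion): two Tit For Tat players in an iterated prisoner’s dilemma where moves are occasionally flipped by noise; one accidental “defection” then echoes back and forth.

```python
# Two Tit For Tat players with noisy moves: a single misfire starts a
# retaliation cycle that neither side chose. (Illustrative sketch only.)
import random

def noisy_tit_for_tat(rounds=20, noise=0.05, seed=2):
    random.seed(seed)
    a_last, b_last = "C", "C"          # both start out cooperating
    history = []
    for _ in range(rounds):
        a = b_last                     # Tit For Tat: copy opponent's last move
        b = a_last
        if random.random() < noise:    # occasional misfire / misread move
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        history.append(a + b)
        a_last, b_last = a, b
    return history

print(" ".join(noisy_tit_for_tat()))
# Typical pattern: CC CC ... then one noise-flipped D, after which the players
# trade DC / CD retaliations until another flip either restores cooperation
# or locks in mutual defection.
```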
To get past the problems of biased measurement of proportional response and so on, and thereby break the cycle of violence, at least one side has to undertake to dramatically de-escalate—while still providing a deterrent to unrestricted use of force. The last point is essential. Absolute pacifism may well work brilliantly as a response to some provocateurs, but not all.
I like to summarize my views on violence by saying that the only thing worse than two armies fighting a war is one army fighting a war. If rationalists forswear the Dark Arts, the result will be one army fighting a war. And we’ll be the targeted civilians. Well, not quite. Reason is roughly comparable to small arms; the Dark Arts are heavy artillery and airpower. Or so I fear. If anyone has links to research on the relative persuasive powers of reason and unreason, it would certainly help clarify the issue.
That’s a very real danger, but that’s where the “dramatically de-escalate” part comes in. One can also call foul on one’s own side when excessively dark maneuvers are used.
Thanks for the pointer. There is much discussion in philosophy of the difficulties and possibilities of disentangling information about subjective experience as such from memories, verbal reports, and the like. See, for example, Eric Schwitzgebel’s Descriptive Experience Sampling experiment (http://schwitzsplinters.blogspot.com/). Kahneman’s research will certainly help fuel the fire; I hope it also advances the debate.
Providing for one’s own future prosperity is generally considered a question of wisdom rather than morals. However, given that a person and his future mind moment are merely similar entities connected by a near-continuous transformation, rather than being the same entity, it would seem that there must be a symmetry between the moral implications of a person’s treatment of others and of his own future.
Agreed. Caring in a deep and personal way about those future mind moments is nearly universal, but no more rationally compelling than caring about other mind moments. One might say that evolution has been keener to instill empathy for one’s own future than for the mind moments of others. When it comes to considerations of rationality, the main difference is that, if you care for your own future but disrespect other people, others can typically retaliate in ways that hurt what you value. Whereas, if you care for other people but disrespect your own future, your future self is utterly powerless.
Calvin and Hobbes had a good line on the powerlessness of the future self, in the time travel series. But I suppose I should abstain from providing a link to material that probably violates copyright. So I’ll just mention that at one point, Calvin’s past, present, and future selves all argue, and one of them says “Go ahead and hit me—my future self will be the one who hurts.”
Bypassing the question of terminal values, it would still be very useful to have a good argument map of factual issues which are hotly disputed.
I like Morendil’s three-part distinction because it foregrounds (b), what we want to have happen. That’s there in the six hats implicitly (especially feelings, critical judgment, and positive aspects), but the six hats seem too focused on the particular proposal. What’s good about this proposal, what’s bad about it, how do I feel about it—all are asking secondary questions, when the primary questions should be what’s good, what’s bad, and what might be better—about the whole situation. Foregrounding “what we want to have happen” could be helpful in thinking about cryonics. What kinds of future living do I want to have happen—ones in which future experiencers remember my experiences? Ones in which future agents carry out my (present?) goals? Ones in which some future person is me (and what does that mean)? Etc.
But I like the foregrounding of lateral thinking in one of the six hats (green hat). To my mind this is usually the most neglected step in human decision-making. Scott Adams (the Dilbert author) tells the story of a businessman who was notorious for bringing ten new ideas to every business meeting, at least nine of which were incredibly bad. The businessman was Ted Turner, founder of CNN. Having bad ideas costs extremely little—especially in a context where multiplication of ideas is the norm and evaluation of ideas is deliberately postponed. Having ideas in general, i.e. brainstorming, also costs little.
I think that both the nihilism and the “joy in the merely real” come from a sort of subjective imagining and have very little connection to knowledge. The people for whom materialism threatens nihilism at first imagine themselves to be living in one sort of world; then, they imagine another sort of world, and they have those responses. Meanwhile, the self-identified materialists have been having their experiences while already imagining themselves to be living in a materialist world, so they don’t see a problem.
Doesn’t this support simplicio’s thesis? If there’s little connection to knowledge—which I take to mean that neither emotional response follows logically from the knowledge—then epistemic rationality is consistent with joy. And where epistemic rationality is not at stake, instrumental rationality favors a joyful response, if it is possible.
Or one could be selfish according to a non-fundamental, ontologically reducible continuity. At least, I don’t see why not. Has anyone offered an argument for pattern over process?
randallsquared has it dead right, I think.
I like this example because it has nice tidy prior probabilities. That’s very much lacking in the Doomsday Argument—how do you distribute a prior over a value that has no obvious upper bound? For any particular finite number of people that will ever live, is the prior probability of that being the number much greater than zero? Even if I can identify something truly special about the reference class “among the first 100 billion people” as opposed to any other mathematically definable group—and thus push down the posterior probabilities of very large numbers of people eventually living—it doesn’t seem to push down very far.
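To see how much rides on the prior, here is a rough numerical sketch (entirely my own construction; the log grid, the two priors, and the 10^15 cutoff are arbitrary illustrative choices):

```python
# Prior sensitivity in the Doomsday Argument (illustrative assumptions only).
import numpy as np

rank = 1e11                                # roughly "among the first 100 billion people"
N = np.logspace(np.log10(rank), 18, 2000)  # candidate totals N, treated as discrete hypotheses

def mass_above(prior, threshold):
    """Posterior P(N > threshold | our birth rank) for an unnormalized prior over the grid."""
    likelihood = 1.0 / N                   # self-sampling: P(our rank | N) = 1/N when rank <= N
    post = prior * likelihood
    post /= post.sum()
    return post[N > threshold].sum()

priors = {
    "log-uniform, p(N) ~ 1/N      ": 1.0 / N,
    "fatter tail,  p(N) ~ 1/sqrt(N)": 1.0 / np.sqrt(N),
}
for name, p in priors.items():
    print(name, mass_above(p, 1e15))
```

The point is only that the posterior probability of a very large future population is driven almost entirely by which diffuse prior you started with, which is exactly the problem an unbounded range creates.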
I think I can see why instrumental rationality could be regarded as just part and parcel of epistemic rationality. Once the probabilities have been rationally evaluated, what work is left for “instrumental reason” to do? Am I on the right track at all? If not, please elaborate.
Sean, that’s a useful link. The “irreducible-pattern” epistemological version of emergence, described there, is one I’d heard before. It definitely wouldn’t fit everything (if I had to bet, I’d bet it fits nothing).
As far as I know, Pol Pot’s government “wins” the democide contest, having killed off about 30% of the Cambodian population.
The extent to which you can control the peer groups your kids socialize with is quite large. Some religious sects, for example, control that socialization very tightly. The wisdom of such an approach is debatable, but it’s definitely possible. A hybrid approach might be to influence (rather than strictly control) the peer-selection process and also attempt to immunize your kids to the worst aspects of their peer culture.
Sadly, this is often the equivalent of tilting at windmills. The kids’ blind conformity and fanatical adherence to their peer-group norms, and their fervor to ruthlessly punish and ostracize peers who fail to live up to those norms or who end up assigned low status by them, are rarely matched by even the most fanatical and closed-minded adults.
As Bongo said, “teach them to hide it”. That is, let them know that they can outwardly go along with peer group standards while inwardly reserving judgment, or holding a different judgment. Also, teaching kids social skills (primarily, how to make friends) allows them to participate in multiple, sometimes overlapping groups. That will enhance the ability to reserve judgment, first on what the groups differ on, and later also on what they share.
homung suggests that there may be immutable laws of the universe that mean there are only a finite number of apocalyptic technologies. Note that even if the probability of such technological limits is small, for Phil’s argument to work either that probability would have to be infinitesimal, or some of the doomsday devices would have to remain threatening even after the various attack/defense strategies reach a very mature level of development. All of the relevant probabilities look finite rather than infinitesimal to me.