Where can I find survey results? I had just been thinking I’d be interested in a survey, also hopefully broken down by frequency of postings and/or karma. But if they’ve been done, in whatever form, great.
Bart119
3 Apr 2013 20:34 UTC; 38 points — comment on Open Thread, April 1-15, 2013
OK, this is my first day posting on Less Wrong. This topic “Don’t Plan For the Future” interests me a lot and I have a few ideas on it. Yet it’s been inactive for over a year. Possibilities that occur to me: (1) the subject has been taken up more definitively in a more recent thread, and I need to find it, (2) because of the time lag, I should start a new “Discussion” (I think I have more than 2 karma points already so it’s at least possible) even if it’s the same basic idea as this, (3) I should post ideas and considerations right here despite the time lag. If there’s some guide that would answer this question, I’ll happily take a pointer to that as well.
Should we devote resources to trying to expand across the galaxy and thus influence events millions of years in the future? I say no.
I’ve been thinking about this question for many years, and it’s only in the past few days that I’ve learned about the Singularity. I don’t at the moment assign a very high probability to it—yes, I’m ignorant, but I’m going to leave it out for the moment.
Suppose we posit that from some currently unavailable combination of technology, physics, psychology, politics and economics (for starters) we can have “legs” and cover interstellar ground. We also crucially need a density of planets that can be exploited to create the vibrant economies that could launch other expensive spacecraft to fuel exponential growth. If we’re going to expand using humans, we have to assume a rather high density of not just planets that can support intelligent life, but planets that can support our particular form of intelligent life—earth-like planets. We have to assume that those planets have not evolved competent, intelligent life of their own—even if they are far behind us technologically, their inherent advantages of logistics could very well keep us from displacing them. But on the plus side, it also seems highly likely that if we can get such a process of exponential growth going in our corner of the galaxy, it could then be expanded throughout our galaxy (at the least).
If we can do it, so can they—actually, they already did.
To expand on that, I attach great importance to the fallacy of human exceptionalism. Over history we’ve had to give up beliefs about cultural and racial superiority, humans being fundamentally different from animals, the earth being the center of the universe, the sun being the center of the universe… The list is familiar.
We’ve discovered stars with planets. Perhaps fewer have small, rocky (non-gas giant) planets than theories initially suggested, but there are a few (last I knew) and that’s just a small adjustment in our calculations. We have no evidence whatsoever that our solar system is exceptional on the scale of the galaxy—there are surely many millions of rocky planets (a recent news story suggests billions).
Just how improbable is the development of intelligent life? I’d be interested to know how much deep expertise in biology we have in this group. The 2011 survey results say 174 people (16%) in the hard sciences, with some small fraction of that biologists? I claim no expertise, but can only offer what (I think) I know.
First, I’d heard it guessed that life developed on earth just about as soon as the earth was cool enough to allow its survival. Second, evolution has produced many of its features multiple times. This seems to bear on how likely evolution elsewhere is to develop various characteristics. If complicated ones like wings and eyes and (a fair amount of) intelligence evolved independently several times, then it wasn’t just some miraculous fluke. It makes such developments in life on other planets seem far more probable. Third, the current time in earth history has no special status. If intelligent life hadn’t evolved on earth by now, it would have had a few billion more years in which to happen.
Based on those considerations, I consider it a near certainty that alien civilizations have developed—I’d guess many thousands in our galaxy as a minimum. It’s a familiar argument that we should assume we are in the middle of such a pack temporally, so at the least hundreds of civilizations started millions of years ago. If expansion across the galaxy was possible, they’d be here by now. The fact that we have detected no signals from SETI says virtually nothing—that just means there is nobody in our immediate vicinity who is broadcasting right now.
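The “middle of the pack” reasoning above can be put as a back-of-envelope sketch. Every number below (the civilization count, the time window) is an illustrative assumption chosen for the sake of the argument, not data:

```python
# Back-of-envelope sketch of the temporal "middle of the pack" argument.
# All inputs are illustrative assumptions, not measurements.

n_civilizations = 2000    # assumed minimum number of civilizations in the galaxy
window_years = 5e9        # assumed span of time over which they could arise

# If start times are spread roughly evenly over the window and we sit
# near the median, about half of them started before ours did.
earlier_than_us = n_civilizations // 2
mean_gap_years = window_years / n_civilizations  # average gap between starts

# Even a civilization only 100 places ahead of us in the queue would have
# a head start measured in hundreds of millions of years.
head_start_of_100th = 100 * mean_gap_years

print(earlier_than_us)      # civilizations that started before ours
print(head_start_of_100th)  # head start of the 100th-earlier one, in years
```

The point of the sketch is only that, under any even spread of start times, “hundreds of civilizations started millions of years ago” follows almost automatically from assuming we are temporally typical.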
Since we haven’t observed any alien presence on earth, we would have to assume that civilization expansion is not independent—some dominant civilization suppresses others. There are various possibilities as to the characteristics of that one civilization. They might want to remain hidden. They might not interfere until a civilization grows powerful enough to start sending out colonies to other worlds. Perhaps they just observe us indefinitely and only interfere if we threaten their own values. Even in some benign confederation, where all the civilizations share what they have to offer, we would offer just one tiny drop to a bucket formed from—what, millions? -- of other civilizations. What all of these have in common is that it is not our values that dominate the future: it’s theirs.
It seems likely to me that my initial assumption about exponential space colonization is wrong. It is unfashionable in futurist circles to suggest something is impossible, especially something like sending colonists to other planets, something that doesn’t actually require updates to our understanding of the laws of physics. Critics point out all the other times someone said something was impossible, and it turned out that it could be done. But that is very different from saying that everything that seems remotely plausible can in fact be done. If I argued against interstellar colonization based on technical difficulties, that would be a weak argument. My argument is based on the fact that if it were possible, the other civilizations would be here already.
This argument extends to the colonization potential of robots produced in the aftermath of the Singularity. If their robots could do it, they’d be here already.
To achieve the huge win that would make such an expensive, difficult project worthwhile, exponential space colonization has to be possible, and we have to be the first ones. I think both are separately highly unlikely, and in combination astronomically unlikely.
I should add that I know this is probably wrong in some respects, and I’m very interested in learning what they are.
Hmmmm. Nearly two days and no feedback other than a “-1” net vote. Brainstorming explanations:
1. There is so much wrong with it no one sees any point in engaging me (or educating me).
2. It is invisible to most people for some reason.
3. Newbies post things out of synch with accepted LW thinking all the time (related to #1).
4. No one’s interested in the topic any more.
5. The conclusion is not a place anyone wants to go.
6. The encouragement to thread necromancy was a small minority view or intended ironically.
7. More broadly, there are customs of LW that I don’t understand.
8. Something else.
Survey of older folks as data about one’s future values and preferences?
My IQ is somewhere in the 130s, and a standard deviation is usually something like 12-15 points, so taking advice from my future self would be like taking advice from a person of average (100) IQ now. I don’t pay terribly much attention to what such people say. I’d still pay a lot of attention to any message from the future, because my future dim elderly self has all the fruits of my higher-IQ periods to draw on, but this observation is enough to largely eliminate the interest of contemporary averages.
My suggestion wasn’t that older people would be smarter or think more clearly, or even have access to some fount of wisdom that the young don’t have. It was that their values and preferences change. To take a made-up example (though more plausible than some I could think of), suppose that 95% of 60-year-olds say that they seriously regret having had any body piercings. If you at 25 are considering a body piercing, you might do your utility calculation figuring your enjoyment of it now on the plus side, and then subtracting your expected displeasure with it as you get older. This could conceivably come into play on such questions as whether to spend those extra 2 years finishing your Ph.D. too.
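The made-up piercing example can be written out as a toy expected-utility calculation. Every number here is invented purely for illustration (the 95% regret figure is the hypothetical survey result; the utilities and survival probability are arbitrary):

```python
# Toy expected-utility sketch of the piercing decision at 25.
# All numbers are invented for illustration, not drawn from any real survey.

enjoyment_now = 10.0    # utility gained from the piercing while young
regret_if_older = -8.0  # disutility reported by regretful 60-year-olds
p_regret = 0.95         # hypothetical survey figure: 95% regret it at 60
p_reach_60 = 0.9        # assumed chance of living to experience the regret

# Enjoyment now, minus the probability-weighted regret later.
expected_utility = enjoyment_now + p_reach_60 * p_regret * regret_if_older

print(round(expected_utility, 2))  # still positive, but much reduced
```

The structure, not the numbers, is the point: the survey of older folks supplies the probability term that a 25-year-old has no direct way to estimate.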
Once it’s shown conclusively to work no one will want it anymore :)
I don’t get the joke or reference, and it sounds intriguing. Does it mean that if people can be revived successfully into indefinite lifespans, then there would be no need to freeze people going forward?
My big problem with indefinite lifespans is that I think we’re already a warped society by having so many old people (meaning, say, older than me at 57 :-)). I suppose if we could keep everyone from aging, so that they retained their 25-year-old physiques, energy, and mental status, that would address the problem to some extent. But if we get a world full of reasonably spry 80-year-olds, it doesn’t appeal to me. In my book of values, all else being equal, society is supposed to be half children.
Thought experiment: Suppose we suddenly developed the technology to revive everyone who has ever lived (they left some sort of holographic signal that Google finds it can read :-)). Would we want to? Historians would be overjoyed to revive selected ones because they would help us understand the past. But as a matter of restoring them for their own sake?
As a newcomer I’m sure these have been discussed over and over, and pointers to the relevant discussions are welcome in place of rehashing old arguments.
You can distinguish the two. Older folks can learn from younger ones based on specific experience. Consider: Bob might be considering law school as a career change at 40 and learn from a 30-year-old who started the practice of law at 25 that it was not fun.
You can certainly imagine that age itself, or things that strongly correlate with age, could bring a different perspective. Another trivial sort of example: you decide at 50 that you want to buy a home where you’ll never have to move again, and you are considering a condo that’s on the 4th floor with no elevator. The wisdom of 80-year-olds might say that’s unwise.
The point, of course, is to investigate to find less obvious examples—if any.
For some young people, there might be some discomfort in admitting this as a relevant source of data about how to live life.
The example I’ve read about of whether to finish your Ph.D. could even be relevant here. If someone did a survey showing that 75% of old folks who dropped out of Ph.D. programs wished they’d finished them, would that be relevant? It certainly wouldn’t decide the issue, but I think it would be a factor. And you’d have to factor in or out various cognitive biases.
(I was in exactly that position myself, and decided to finish the Ph.D. It made sense in my case because I didn’t have a burning passion to get on to the next thing in life (nor did I know what that would be). But I was correct that I would never directly need it.)
(Example changed because the piercing example equivocates possible mistakes by 16-year-olds and 25-year-olds in the 95% figure)
You meant “equates” instead of “equivocates”? Even with that change I’m not sure quite what you mean. Maybe not that important.
These speculations are interesting. I think it’s always worth wheeling evolutionary thought up to a problem to see what it says.
However, surveying real people in our real, modern-day world seems far more direct.
Nor do I think that evolution would have much of a reason to cleanly engineer a stable end-state after which development just stops, leaving you with a well-adjusted, perfectly functional body or brain. That may not be a trivial task, after all.
Evolution is constantly making trade-offs, and (last I knew) the reason our bodies fall apart was that evolution didn’t have a strong incentive to keep them going. We last as long as we do because we take care of grandkids, maybe, and Jared Diamond suggested a reason for longevity was that an old person was a storehouse of knowledge.
You mean rationally from an evolutionary point of view? You have less to lose from a bold decision, but perhaps you have much less to gain and that predominates. As a young guy you can take off into the wilds with a young wife and another few couples. Chances might be 90% you’ll be killed, but if you do make it to the new land, you might start a whole new population of people.
I think if you look at deciduous trees of the same species, the young trees get their leaves earlier in the spring than the mature trees. I think I’ve observed that. They’re “gambling”, because a late frost could kill them. But their chances of becoming a mature tree aren’t that great anyway, and they need to grab light before their elders shade them. The older trees can afford to be conservative.
As people in our modern society, there’s some tendency to relax as you get older. Older people encourage you to dance as if no one is watching? Not sure I believe that myself, though. :-)
I think it is a hard question. The foundations of our societies would all be shaken to the core by a sudden resuscitation that doubles the earth’s population (even assuming, as we must, that we can feed them all). I don’t think “save or prolong any life of reasonable quality” scales up past a certain point. At some point, the psychological quality of life of living individuals, which comes from living in a society with a certain structure and values, may trump the right of the formerly dead to live once more. (Humor: If you’ve been widowed three times, do you really want three formerly late husbands showing up at your doorstep? :-))
Thank you so much for the reply! Simply tracing down the ‘berserker hypothesis’ and ‘great filter’ puts me in touch with thinking on this subject that I was not aware of.
What I thought might be novel about what I wrote included the idea that independent evolution of traits was evidence that life should progress to intelligence a great deal of the time.
When we look at the “great filter” possibilities, I am surprised that so many people think that our society’s self-destruction is such a likely candidate. Intuitively, if there are thousands of societies, one would expect high variability in social and political structures and outcomes. The next idea I read, that “no rational civilization would launch von Neumann probes,” seems extremely unlikely because of that same variability. Where there would be far less variability is in the mundane constraints of energy and engineering involved in launching self-replicating spacecraft in a robust fashion. Problems there could easily stop every single one of our thousand candidate civilizations cold, with no variability.
LWers are almost all atheists. Me too, but I’ve rubbed shoulders with lots of liberal religious people in my day. Given that studies show religious people are happier than the non-religious (which might not generalize to LWers but might apply to religious people who give up their religion), I wonder if all we really should ask of them is that they subscribe to the basic liberal principle of letting everyone believe what they want as long as they also live by shared secular rules of morality. All we need is for some humility on their part—not being totally certain of their beliefs means they won’t feel the need to impose their beliefs on others. If religious belief is how they find meaning in a life (that, in my opinion, has no absolute meaning), why rock their boats?
This must have been discussed many, many times. Pointers to relevant discussions, either within or outside LW, appreciated.
Yes, it was vague. I’ll try to be more precise—as much as I can.
Suppose we do a pilot experiment in a small region on the Tigris and Euphrates where people have been living in high population densities for a long time. We have large numbers of people coming back from the dead, perhaps 10 times the current population? Perhaps with infant mortality we have 5 times as many children as adults—lots of infants and young children.
But the UN is ready, prepared in advance. There is land for everyone. We figure at least that the dead have lost the right to their property, so we put them all up in modular housing we make outside the present city.
But there are so many formerly dead, from older linguistic and cultural and religious groups, that they form their own political parties and take over the government.
I could go on, but it’s apparent to me that the social order is completely messed up. Now suppose I’m an Egyptian, and it comes to a vote: Do we want to implement this program in Egypt? Assuming that the as-yet-unresurrected dead don’t get a vote, I can see the proposal being voted down overwhelmingly.
My moral intuition is that the Egyptians have no moral obligation to resurrect their ancestors. They have a right to continue their ways of existence.
Of course, this is an extreme thought experiment, and arguing about details won’t be productive.
I have a similar intuition about, say unrestricted immigration. If someone said that utility would be maximized if anyone could move anywhere on earth they wanted, I have an intuition that I as an American have a right to resist that. The status quo has some weight.
Applying rationality to problems can go too far. In the late 19th and early 20th centuries, a lot of very smart, very thoughtful, very knowledgeable people thought Communism was going to be a great idea. But due to a few slip-ups and miscalculations, it turned out it wasn’t—which we can see with hindsight. No, they didn’t have modern notions of rationalism, but they had the best thinking of their day.
A truism is that if the only tool you have is a hammer, everything looks like a nail. It’s easier to compute utility on the level of individuals. You can spin a story based on that about what society should look like, but I think you might be biased by the fact that your tool can apply. If the alternative is, “My tools don’t have anything to say on that issue because of complex interactions among people and the entire fabric of society”, then you would be biased to reject that alternative.
I know this brings up a lot of issues, some of which should be considered separately. And I am ignorant of a lot of LW work. Pointers to other work welcome.
It seems that implicit in any discussion of the kind is, “What do you think I ought to do if you are right?”.
For theists, the answer might be something leading to, “Accept Jesus as your personal savior”, etc.
For atheists, it might be, “Give up the irrational illusion of God.” I’m questioning whether such an answer is a good idea if they are at least humble and uncertain enough to respect others’ views—if their goal is comfort and happiness as opposed to placing a high value on literal truth.
But do recall, I’m placing this in the “stupid questions” thread because I am woefully ignorant of the debate and am looking for pointers to relevant discussions.
I remain quite confused.
In fact, it is totally unfair of you to assume that having this conversation is so pressing that it goes without saying. After all, not all theists proselytize.
OK. This seems to imply that there is some serious downside about starting such a conversation. What would it be? It would seem conciliatory to theists, if some (naturally enough) assume that what atheists want is for them to embrace atheism.
I’ll say only that I’m not convinced that believing unpleasant but true things is inherently inconsistent with being happier.
I hope I’ve parsed the negatives correctly: Certainly believing unpleasant but true things is highly advantageous to being happier if it leads to effective actions (I sure hope that pain isn’t cancer—what an unpleasant thing to believe… but I’ll go to the doctor anyway and maybe there will be an effective treatment). If it means unpleasant things that can’t be changed, then that’s not inherently inconsistent with being happier either, for instance if your personal happiness function includes that discovering that you are deceiving yourself will make you very unhappy.
The question is more whether it is a valid choice for a person to say they value pleasant illusions when there is no effective way to change the underlying unpleasant reality.
We object when someone else wants to infringe on our liberties (contraception, consensual sexual practices), and my suggestion was that a mild dose of doubt in one’s faith might be enough to defang efforts to restrict other people’s liberties.
I knew a devout Catholic who was also a devout libertarian, and his position on abortion was that it was a grave sin, but it should not be illegal. I’m not sure if that position required a measure of doubt about the absolute truth of Catholicism, but it seems possible.
I stumbled here while searching for some topic, and now I’ve forgotten which one. I’ve been posting for a few weeks, and just now managed to find the “About” link that explains how to get started, including writing an intro here. Despite being a software engineer by trade these past 27-odd years, I manage to get lost navigating websites a lot, and I still forget to use Google and Wikipedia on topics. Sigh. I’m 57, and was introduced to cognitive fallacies as long ago as 1972. I’ve tried to avoid some of the worst ones, but I also fail a lot. I kept a blog with issue-related essays for a while, and whatever its shortcomings, I was proud of the fact that when I ran out of things to say, I stopped posting. With the prospect of a community like this one that might respond substantively, maybe I’ll be inspired to write more here.
This description of a guy who believed in objective morality but lost his faith impressed me a lot. That’s me. I don’t think there’s any very compelling reason to live one’s life in a particular way, or any real reason that some actions are preferable to others. That might be called nihilism. I live a decent life, though, because I’m happier pretending not to be a nihilist and making moral arguments and living honorably and all. But when the going gets tough (as in unpleasant consequences to some line of thought that doesn’t make me happy), I always have the option of shrugging my shoulders, yawning, and going on to the next topic. Rationality too is a fun tool. I find it most helpful within the relatively small questions of life.
Maybe I’m missing something.
I’m not saying my behavior is random, or uncaused. I experience preferences among actions. Factors I’m unaware of undoubtedly play a part; I can speculate about them, as can others, and I or they could try to model them. But as I experience reality, I’m only striving up to a point to do the Right Thing. My speculation is that if the cost of doing the Right Thing exceeds the cost of reminding myself that I’m actually a nihilist, I’ll bail on morality.
I’m very interested in arguments as to why nihilism isn’t a consistent position—heck, even why it’s not a good idea or how other people have gotten around it.
I’ve been aware of the concept of cognitive biases going back to 1972 or so, when I was a college freshman. I think I’ve done a decent job of avoiding the worst of them—or at least better than a lot of people—though there is an enormous amount I don’t know and I’m sure I mess up. Less Wrong is a very impressive site for looking into nooks and crannies and really following things through to their conclusions.
My initial question is perhaps about the social psychology of the site. Why are the two popular subjects here (1) extending lifespan, including cryonics, and (2) increasingly powerful AIs leading to a singularity? Is there an argument that concern for these things is somehow derivable from a Bayesian approach? Or is it more or less an accident that these things are of interest to the people here?
Examples of other things that might be of interest could be (a) “may I grow firmer, quieter, warmer” (rough paraphrase of Dag Hammarskjold), (b) I want to make the very best art, (c) economics rules and the key problem is affording enough for everyone. I’m not saying those are better, just that they’re different. Are there reasons people here talk about the one set and not the other?