# Irrationality Game III

The ‘Irrationality Game’ posts in Discussion came before my time here, but I had a very good time reading the bits written in the comments sections. I also had a number of thoughts I would’ve liked to post and get feedback on, but I knew that, buried in such old threads, not much would come of them. So I asked around, and the feedback suggested that people would be open to a reboot!

I hereby again quote the original rules:

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it’s all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here’s an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like ‘fairly confident’.

Example (not my true belief): “The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%).”

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What ‘basically’ means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it’s a pretty big difference of opinion. If they’re at 99.9% and you’re at 99.5%, it could go either way. If you’re genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

That’s the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn’t incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post “I believe in God”. Don’t pick nits and say “Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us...” and downvote it. That’s cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying “I believe in God. 99% sure.” isn’t informative because we don’t quite know which God you’re talking about. A deist god? The Christian God? Jewish?

Y’all know this already, but just a reminder: preferences ain’t beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word “should” are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It’s a challenge!

• Generally, no repeating an altered version of a proposition already in the comments unless it’s different in an interesting and important way. Use your judgement.

• If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post.

• Don’t post propositions as comment replies to other comments. That’ll make it disorganized.

• You have to actually think your degree of belief is rational. You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average. This could be good or bad. Lots of upvotes means lots of people disagree with you. That’s generally bad. Lots of downvotes means you’re probably right. That’s good, but this is a game where perceived irrationality wins you karma. The game is only fun if you’re trying to be completely honest in your stated beliefs. Don’t post something crazy and expect to get karma. Don’t exaggerate your beliefs. Play fair.

• Debate and discussion is great, but keep it civil. Linking to the Sequences is barely civil—summarize arguments from specific LW posts and maybe link, but don’t tell someone to go read something. If someone says they believe in God with 100% probability and you don’t want to take the time to give a brief but substantive counterargument, don’t comment at all. We’re inviting people to share beliefs we think are irrational; don’t be mean about their responses.

• No propositions that people are unlikely to have an opinion about, like “Yesterday I wore black socks. ~80%” or “Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%.” The goal is to be controversial and interesting.

• Multiple propositions are fine, so long as they’re moderately interesting.

• You are encouraged to reply to comments with your own probability estimates, but comment voting works normally for comment replies to other comments. That is, upvote for good discussion, not agreement or disagreement.

• In general, just keep within the spirit of the game: we’re celebrating LW-contrarian beliefs for a change!

I would suggest placing *related* propositions in the same comment, but wildly different ones might deserve separate comments for keeping threads separate.

Make sure you put “Irrationality Game” as the first two words of a post containing a proposition to be voted upon in the game’s format.

Here we go!

EDIT: It was pointed out in the meta-thread below that this could be done with polls rather than karma, so as to discourage playing-to-win and getting around the hiding of downvoted comments. If anyone resurrects this game in the future, please do so under that system. If you wish to test a poll format in this thread feel free to do so, but continue voting as normal for those that are not in poll format.

• Irrationality Game: Less Wrong is simply my Tyler Durden—a dissociated digital personality concocted by my unconscious mind to be everything I need it to be to cope with Camusian absurdist reality. 95%.

• I am very curious as to what your evidence for backing up this proposition is or would be.

• If you’re right, your unconscious mind is awesome.

• So, I’m supposed to upvote this unless I believe that I am a figment of your imagination? This seems a bit cheaty. Maybe I should post “No one other than ThisSpaceAvailable believes that this statement is true. 99.99%”

• Are we supposed to judge the rationality of this statement based on how rational we think it is for you to believe it, or based on how rational we think it is for us to believe it?

• Wow. I think this one might win.

• Irrationality Game: (meta, I like this idea)

Flush toilets are a horrible mistake. 99%

• “If it’s yellow let it mellow, if it’s brown flush it down.”

This is one of the first things I remember learning, growing up with tank water.

• Based on what reasoning?

• Upvoted for “horrible”. I don’t see how their impact is all that bad—at 3.5 GPF (which is standard), all of the flush toilets in California together use about 750,000 acre-feet of water per year. Compared to the 34 million acre-feet used in the same state for agriculture, it’s clear that flush toilets use a significant but still pretty small fraction of the state’s water, and “horrible” is an overstatement. (I chose California because it is a populous state that regularly has water shortages.)
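The quoted figure is easy to sanity-check. A minimal sketch, assuming a California population of roughly 38 million and five flushes per person per day (both assumed inputs, not from the comment; only the 3.5 GPF and 34 million acre-feet figures come from the discussion above):

```python
# Rough sanity check of the ~750,000 acre-feet/year figure quoted above.
GALLONS_PER_ACRE_FOOT = 325_851

population = 38_000_000       # assumption: approximate California population
flushes_per_person_day = 5    # assumption: a common rule-of-thumb figure
gallons_per_flush = 3.5       # the standard GPF cited in the comment

gallons_per_year = population * flushes_per_person_day * gallons_per_flush * 365
acre_feet_per_year = gallons_per_year / GALLONS_PER_ACRE_FOOT
print(f"{acre_feet_per_year:,.0f} acre-feet/year")  # on the order of 750,000

# Compare against the 34 million acre-feet of agricultural use cited above.
share_of_agriculture = acre_feet_per_year / 34_000_000
print(f"{share_of_agriculture:.1%} of agricultural use")
```

Under these assumptions the result lands near the quoted 750,000 acre-feet, a couple of percent of agricultural use, which supports the “significant but still pretty small” characterization.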

• I admit to hyperbole; with a little more thought, I would have worded it differently, both to clarify that it’s pretty far down on our list of societal problems, and that it’s more an individual-level mistake than a systematic one (though there are systematic benefits to fewer flush toilets).

• How is this not just a preference?

• I suppose my actual belief is that flush toilets are a mistake outside of urban areas; I don’t have much experience with urban living or what other poop strategies could work with it.

• Provide easy long distance transport of human waste in urban environments.

• Exchanges weekly-to-yearly chores for purchased services.

• Create additional dependency on water (and by extension outside water districts, electricity).

• Turn (vast amounts of) drinking water into black water.

• Create a waste product from human manure, which is a valuable resource (fertile soil) when dealt with properly.

• Adds significantly to the cost of housing (especially outside sewer districts).

• I still think you’re overconfident, so upvoting, but the justification is convincing enough to make me update from near-zero to something noticeably above zero. I never thought of it quite that way.

• Translate it to “In x% of new non-urban houses, there are options better than flush toilets.” My confidence in my confidence assignment isn’t very high yet though, so I am quite open to being overconfident.

And obviously both lists are non-exhaustive.

• Flush toilets handle large numbers of people for a long time fairly easily.

• Flush toilets get clogged.

• You forgot the “not having the whole village die from a cholera outbreak” part :-/

• The running water revolution came around the same time as the sanitation revolution. I’m not 100% sure you necessarily HAVE to have one to have the benefits of the other, though it helps. Modern composting toilets and hot-composting of human manure is quite safe if done properly. Flush toilets definitely get the sanitation thing done, but perhaps rather than ‘mistake’ we could call them an ‘inefficient first draft that still works well’.

• is quite safe if done properly

That’s the thing—it’s basically an issue of idiot-proofing. Many things are “safe if done properly” and still are not a good idea because people in general are pretty bad at doing things properly.

Flush toilets are idiot-proof to a remarkable degree. Composting human manure, I have my doubts.

• Flush toilets are idiot-proof to a remarkable degree.

Almost all of them are, but I’ve seen a couple of them which are very easy to accidentally flush improperly in such a way that water will keep on running until someone notices and fixes it.

• That clears things up a lot, and I changed my downvote to an upvote. EDIT: To be clear, I disagree with you.

• Flush toilets do create additional dependency on water, however if one already has running water and depends on it for drinking and washing, how significant is the additional water dependency for flush toilets?

• The reason flush toilets use potable water is an economic one. It is simply cheaper to use one unified water system instead of two, when someplace already has running water. The cost of the wasted drinking water is negligible compared to the cost of building a second plumbing system.

• This point is the most interesting to me. I have no information on the usefulness of human manure, and would be interested to know if human manure would have a comparable market value to cattle manure or synthetic fertilizer. I am skeptical because of the tendency for human waste to carry human diseases.

• I have no disagreements with this disadvantage, but simply feel that the vast, vast majority of people would be willing to pay for the extra cost in housing if they already had indoor plumbing.

• I expect this is too expensive to be worth it, but instead of a whole second water system, it’s theoretically possible to use gray water from bathing and showering for flushing.

On second thought, this might actually make sense for apartment buildings and hotels, since some gray water could be stored and sent downhill for flushing—you wouldn’t need a pump in the bathroom.

• Austin’s “Dillo Dirt” is made from yard waste and treated human sewage. Less-treated sewage gets used to fertilize ranchland. As you suspected, there’s more than a little controversy over whether the result is well-composted enough for health and aesthetics, but it’s mixed up with concern over the standards for various non-fecal pollutants. Presumably whatever closed loop fertilization trist is advocating wouldn’t have to worry so much about the various kinds of industrial and medical waste people dump down their drains.

• I changed my downvote to an upvote.

I think you’re playing it wrong? You upvote if you disagree.

• I do disagree. Did you read the rest of my comment? I originally downvoted because the rules also say to downvote if someone expresses a preference disguised as a belief.

• Irrationality game: Every thing which exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe and any sub component of an aforementioned system.

• By “any subcomponent,” do you mean that the powerset of the universe is composed of conscious entities, even when light speed and expansion preclude causal interaction within the conscious entity? Because, if the universe is indeed spatially infinite, that means that the set of conscious entities is the infinity of the continuum; and I’m really confused by what that does to anthropic reasoning.

• By “any subcomponent,” do you mean that the powerset of the universe is composed of conscious entities, even when light speed and expansion preclude causal interaction within the conscious entity?

If you replace consciousness with subjective experience I believe your statement is correct. Also once you have one infinity you can take power sets again and again.

I’m really confused by what that does to anthropic reasoning

As far as I understand it, this breaks anthropic reasoning because your event space is now too big to define a probability measure on. For the time being I have concluded that anthropic reasoning doesn’t work because of a very similar argument, though I will revise my position once I have learned the relevant math.

• How would one define subjective experience for rocks and atoms?

• Defining subjective experience is hard for the same reason that defining red is hard, since they are direct experiences. However in this case I can’t get around this by pointing at examples. So the only thing I can do is offer an alternative phrasing which suffers from the same problem:

If you accept that our experiences are what an algorithm feels like from the inside, then I am saying that everything feels like something from the inside.

• Besides the issue of “subjective experience” that has already been brought up, there’s also the question of what “thing” and “exists” mean. Are abstract concepts “things”? Do virtual particles “exist”? By including ideas, you seem to be saying “yes” to the first question. So do subjective experiences have subjective experiences themselves?

Also, it’s “an aforementioned”. That’s especially important when speaking.

• Besides the issue of “subjective experience” that has already been brought up, there’s also the question of what “thing” and “exists” mean.

I believe some form of MUH is correct so when I say exist I mean the same thing as in mathematics (in the sense of quantifying over various things). So by a thing I mean anything for which it is (at least in principle) possible to write down a mathematically precise definition.

Presumably abstract ideas and virtual particles fall under this category, though in neither case am I sure, because I don’t know what you mean by abstract idea/I don’t know enough physics. I’m not sure whether it is possible to give a definition of subjective experience, so I don’t know whether subjective experiences have subjective experiences.

Also, it’s “an aforementioned”. That’s especially important when speaking.

Substituted an a for an an.

• Irrationality game:

Most posthuman societies will have a violent death rate much higher than humans ever had. Most posthumans who will ever live will die in wars. 95%

• Interesting. So, you have Robin Hanson’s belief that we won’t get a strong singleton; but you lack his belief that emulated minds will be able to evaluate each other’s abilities with enough confidence that trade (taking into account the expected value of fighting) will be superior to fighting? That’s quite the idiosyncratic position, especially for 95% confidence.

• Not a bad hypothesis, but your confidence level is too high… hence my upvote.

• Are you imagining that outcome because you expect resource shortages? Peaceful lives are just too boring? Posthumans are generally too alien to each other for stable cooperation?

Now that I think about it, I find the last alternative pretty plausible.

• Seems fairly reasonable on its face, actually; once you’ve gotten rid of disease and age, what’s left is accidents, violence, and suicide if you’re counting that separately from violence.

Upvoted, though, because I think you’re undercounting accidents (more Americans already die in automotive accidents than die violently, by a large margin; I’d expect the same is true for the rest of the First World but haven’t seen statistics) and making too strong a statement about the structure of posthuman society.

• i think the concept of death is extremely poorly defined under most variations of posthuman societies; death as we interpret it today depends on a number of concepts that are very likely to break down or be irrelevant in a post-human-verse

take, for example, the interpretation of death as the permanent end to a continuous distinct identity:

if i create several thousand partially conscious partial clones of myself to complete a task (say, build a rocketship), and then reabsorb and compress their experiences, have those partial clones died? if i lose 99.5% of my physical incarnations and 50% of my processing power to an accident, did any of the individual incarnations die? have i died? what if some other consciousness absorbs them (with or without my, or the clones’, permission or awareness)? what if i become infected with a meme which permanently alters my behavior? my identity?

• You (the reader) do not exist.

EDIT: That was too punchy and not precise. The reasoning behind the statement:

Most things which think they are me are horribly confused gasps of consciousness. Rational agents should believe the chances are small that their experiences are remotely genuine.

EDIT 2: After thinking about shminux’s comment, I have to retract my original statement about you readers not existing. Even if I’m a hopelessly confused Boltzmann brain, the referent “you” might still well exist. At minimum I have to think about existence more. Sorry!

• Cogito, ergo upvoto. :-)

• To quote the adage, I’m a solipsist, and am surprised everyone else isn’t too. I think any intelligent agent should conclude that it is probably something akin to a Boltzmann brain. You could plausibly argue that I am cheating with pronoun references (other people might agree with the solipsistic logic, but centered around them). Is that what you are asking?

EDIT

Is there anything about the world that you expect would appear different to you because of this belief?

Not really. I think some of the problems with AIXI may be AIXI acting rationally where the desired behavior is irrational, but that’s the only time I can think of it coming up outside of a philosophy discussion.

• I often use the concept of Boltzmann brain to relax or fall asleep. Thinking that this is the only moment you will ever get to feel alive and you will die a few moments from now is a good way to put your mind “in the now”. That said, if it actually were true I would expect the reality I perceive to be radically different. Almost everything I know about the outside world is really consistent and ordered, and everything I’ve ever experienced supports the mainstream physical model of the universe. I don’t think there would be an entire history of the universe and Earth and such, which I’m able to confirm relatively well by going into the museum and considering the evidence, if this were just a random fluke. I would expect many things to be far more incoherent.

I still think there’s a low chance it’s true. Not a really low chance, though: it’s probably higher than the chance that I will win the lottery or that the biblical God exists. And this belief doesn’t have much decision-theoretic importance, so I would probably ignore it even if I knew for sure that it’s true.

Btw, how do you resolve the paradox that you can’t trust your own senses and reasoning?

You could plausibly argue that I am cheating with pronoun references (other people might agree with the solipsistic logic, but centered around them). Is that what you are asking?

This game assumes that users actually are real people because otherwise asking about their opinions would be pointless. But now that you explained it I decided to change my downvote to upvote because I think the probability of this being true is low.

• Could you be more specific about what you mean by that?

• Of all possible minds thinking the thought that I am thinking right now, most aren’t living on earth, posting to Less Wrong. Most are random fluctuations in high-entropy soup after the heat-death of the universe, or bizarre minds capable of belief but not thought, or other deluded or confused agents. In all but a negligible fraction of these, you, maia, do not exist.

• What degree of certainty do you place on that belief?

• I could put numbers to it, but it would really be pulling them out of my butt—how certain are you that anthropic reasoning is valid? If it is valid (which seems more likely than not), then you quickly run into the problem of Boltzmann brains. Some people try to exorcise Boltzmann brains from their anthropic viewpoint, but I have no problem with biting the bullet that most brains are Boltzmann brains. The practical implications of that belief, assuming the world is as it appears to be, are (I believe) minimal.

• LessWrong member for [at least a few] months. Guys, it checks out.

… is what I would say if this was reddit.

• Irrationality Game: I am something ontologically distinct from my body; I am much simpler and I am not located in the same spacetime. 50%

EDIT: Upon further reflection, my probability assignment would be better represented as the range between 30% and 50%, after factoring in general uncertainty due to confusion. I doubt this will make a difference to the voting though. ;)

• Why would you be much simpler if you were ontologically distinct?

• The ”;” was meant to be simply a “and also” rather than a “therefore.”

I think that I’m much simpler than my body, and that is one of the reasons why I think I’m ontologically distinct. With 50% probability.

EDIT: Another answer I endorse: If I’m ontologically distinct from my body, then who knows how complicated I am—but apply Occam’s Razor, such that I’m only as complicated as I need to be, and the result will be that I’m much simpler than my body. Anyone who believes that uploading would preserve consciousness should agree with this, since uploading can change the medium of computation to a simpler one.

• Why is this a 19? I thought this was a restatement of the “official LW position”. Or would people argue that an uploaded kokotajlod wouldn’t be the real kokotajlod?

• I guess if you read it loosely. I think the official LW position would be (correct me if I am wrong) that an em of kokotajlod with high enough fidelity to replicate his decision-making process is him; what he is is a particular set of heuristics, instincts, etc., that accompany his body but could theoretically exist outside it. That does match his statement if one reads it as referring to something more like a platonic concept than a spiritual essence.

• I think the “official LW position” is more reductionist than my Irrationality Game statement.

Even if people think that I am a computation, they probably don’t think I’m some sort of Platonic Form, but rather that I’m just a certain type of physical object that “implements” a computation.

That’s my understanding of typical LW thought, at any rate. Which is why I chose the statement that I did. :)

• What does “spacetime” mean? Is the real “you” neither a causal descendent, nor a causal ancestor, of any of your body’s actions? I’d have to put that down somewhere around argmin probability.

Or do you just mean that you consider the real you to be something like a platonic computation, which your material body instantiates? That’s not too far off from some realms of LW semi-orthodoxy.

• Good questions. I’ll explain my reasoning:

Basically, after thinking for a while about consciousness and personal identity, I’ve come to assign high probability to some sort of dualism/idealism being true. It might still be a sort of reductionist dualism, i.e. platonic computations.

So yes, the “platonic computation” theory would count. Do you think my original post ought to be revised given this information? I hope I haven’t been misleading.

As for spacetime and causation: If I’m a platonic form, I’m not in spacetime, nor am I causally related to my body in any normal sense. It all depends on how we define causation, and I tend to be reductionist/eliminativist about causation.

• I hope I haven’t been misleading.

I don’t think you’ve been any more misleading than a dualist is pretty much required to be. The basic ambiguities of dualism do, of course, remain:

1. How does the non-spacetime stuff produce subjective experience, when spacetime stuff can’t?

2. How does your subjective experience correlate with the environment and actions of your material body, just as if there were two-way causation going on? (even when you reduce causation to a Pearl-style net, or to the large-scale behavior of many individually time-reversible components, this question remains).

• (1) It’s not about producing subjective experience, it is about being subjective experience. The idea here is that massive, vague mereological fusions of subatomic stuff just aren’t the sort of thing that can be subjective experiences. Just as no fundamental particle can be a chariot, since chariots have parts.

(2) I have no idea yet. I’m considering some sort of interactionist dualism, or some sort of idealism, or some sort of mathematical multiverse theory with self-contained mathematical structures that play the role of Platonic computations, with measure assigned by a simplicity weighting that generates the appearance of a physical world.

And of course I’m considering reductionist physicalism, reductionist mathematical multiverse theory, etc. as well. That’s where the other 50% comes in.

• Taboo “I”. For all the ways of interpreting that claim that I can come up with, I’d give a probability either much, much higher or much, much lower than 50%.

• Could you list the ways? I’m interested to hear which ways you think would give a probability much higher than 50%.

Also, telling me to taboo “I” is telling me to give a successful analysis of consciousness; if I could already do that, I wouldn’t assign 50% probability to it being one thing, and 50% probability to it being another.

• I mean, sometimes by the word “I” I mean my System 2 (as in “I’m not sure what the effect of stricter gun control on the homicide rate would be”), sometimes I mean my System 1 (as in “I’m not scared of spiders”), sometimes I mean my body (as in “I’ve got a backache”), sometimes I mean my car (as in “I’m parked over there”), etc. Which one did you mean there?

...Oh. Now I can see a reasonable non-tautological, non-tautologically false interpretation of your entry.

• Irrationality game—there is a provident, superior entity that is in no way infinite. (I wonder if people here would call that God. As a “superman theist” I had to put “odds of God (as defined in question)” at 5% but identify as strongly theist in the last census.)

Edit: forgot odds. 80%

• I was brought up Catholic, and quickly decided religion (later updated to: human scribes millennia ago, and blind faith therein) didn’t really understand the difference between “bigger than I can understand” and “infinite.” I also have a life so cartoonishly awesome (let me know if you have a solution to this, but I honestly believe that if I laid down the facts people would think I’m lying) that I figured what I called God not only exists but likes me more than everybody else. As I grew up, I “tested” the theory a few times, but never with any scientific rigor, and I think I’d have to call the results positive but not statistically significant. I have no problem assuming no god at the beginning of a discussion, and if I had strong enough evidence I’d like to think I’d admit I’m wrong. I also don’t correlate anything about God with misunderstanding what “death” means—or, as many Catholics call it, life after death.

I know it’s a minority view here and would never trot it out in normal discourse, but it seemed appropriate for the venue.

• I’m intrigued—a life so awesome that it’s implausible for a member of an internet forum with thousands of members (especially this one, which is dedicated to the “science of winning at life” and has an average IQ so high it should be impossible) to actually be living it strikes me as a high bar, as does the idea of it being so awesome that it’s (at least subjectively) convincing evidence for the hypothesis “god is real and he loves me”. The only thing that comes to my mind that meets those criteria and isn’t blatantly supernatural would be winning the lottery, but since lottery winners are often less happy in the end I don’t expect that’s it.

I can’t promise I’ll believe you, but I’d upvote just for sating my curiosity.

• Nah, not implausible I exist, but I rarely post, so have no track record. It’s amazing how many people are above average online...

Mentally, I’m materially above average intelligence, but understand that that only goes so far. And I cultivate rationality (I’m here aren’t I?)

Socially, I’m reasonably well liked by everyone I know, people tell me I have a decent sense of humor. I’m engaged to a beautiful blonde doctor, who is eerily similar to the woman I prayed to meet as a teenager, and has been able to put up with my strangeness for four years.

Bodily, I have no known history of any genetic diseases and have never been dependent on prescription drugs. Though I admit the surgeon general would like me to lose a pound or two. Not “Mommy why’s he like that?” fat though.

Financially, I own my own house, and if I (and they) decide to have kids, my kids and grandkids will never have to work, assuming I don’t earn/inherit/win/et cetera a penny, and my stocks gain 0% (admittedly, they could crash). I tell people I have a Forrest Gump approach: “Lt. Dan said I didn’t have to worry about money any more. And I said, well, that’s good. One less thing.”

Attitudinally, I’m hugely optimistic. Not every day, but more often than not, I wake up and am struck by the wonder of how unlikely my good fortune is.

I know it sounds out there, and it is, but it’s also true. Hand to God. Or Bacon, or whomever you’d like, if you dig propriomanual verification.

• The only parts of that list that seem out-there are the fiancee-eerily-similar-to-prayers (alternate explanations: the kind of people you actively seek out and the kind of people you pray to meet are going to resemble each other, and human memories are fallible to the point that they can be completely fabricated, so a vague similarity might adjust to become an eerily close one) and the financial status (I’m not sure what my prior for this is supposed to be, since “rich enough never to have to work for three generations” varies a lot with expenditure; probably more than just the top 1% could get there if they were frugal).

• Oh agreed. My awesome life is not a good proof, but while I came with a high IQ out of the box, I hadn’t learned the tools of thinking yet to the necessary degree. It’s loosely confirmatory, but not a silver bullet. I was just saying that it prompted me to have the idea a decade or two before (inadequately still, but I knew it) testing the idea. My confidence may be too high, but really, it hasn’t been a priority to test, mostly because I can’t think of a good test that doesn’t come at too high a cost for too little benefit. I’ve never really tried to prove my fiancée’s aunt, whom I’ve never met, exists either. Open to ideas.

And the grandkids thing is something I came up with to give my fiancée perspective when she was just my girlfriend. Anything with “illion” in it becomes “a bunch” to non-math folk. Think about 4 mil. Now think about 8. I’d say (based on nothing but anecdote) if you said each number to fifty men on the street, you’d have at least 85 of the 100 thinking of pretty much the same pile of gold doubloons.

• Commit to not talk to anyone about the results of your test. No hinting, nothing. If you do this the experiment is worthless. Don’t even mention it; if asked about it, say ‘I promised beforehand that I would not give out information’. Take a coin, flip it twenty times or so, record heads and tails. This is not a good test but it’s simple and easy and can at least theoretically provide some information.
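
For what it’s worth, the scoring for a coin test like this is just a one-sided binomial tail. A minimal sketch (the 20-flip protocol is from the comment above; the “bias toward heads” framing is my assumption about what divine favour would predict):

```python
import math

def tail_probability(heads, flips=20):
    """Chance of getting at least `heads` heads from `flips` fair tosses."""
    return sum(math.comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# Even 15/20 heads (tail probability ~0.02) is weak evidence: far too little
# to move a sensible prior on a hypothesis like "god-like phenomena favour me".
p15 = tail_probability(15)
```

As the comment says, 20 flips can at most provide a shred of information; the sketch just makes explicit how little.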

• I agree with RomanE in that this doesn’t seem all that unusual. I’m an undergrad in college right now, so I don’t have the monetary security, fiancée, or house, but everything else applies to me as well. There are a couple of things that could help to explain this, in my case and probably in yours.

Are you fairly neurotypical in a way that doesn’t interfere with your social life or physical well-being? Did you grow up in a middle, upper-middle, or upper class family? Did you grow up in a developed nation? Are you of a racial class that is generally privileged in your area? (E.g., white in America.)

If these are true, or even just the first three, I would say it’s not all that unusual. Not lower than 5%, anyway. I don’t think that, given the above, it is unusual enough to warrant an explanation outside the ordinary.

• I figured what I called God not only exists but likes me more than everybody else.

That’s impossible, He likes me more than everybody else. (.000001% confident)

(Seriously though, I do believe there are god-like phenomena, and they seem to be suspiciously favorable toward me. (Excuse me while I knock on wood and stare plaintively at the fourth wall.))

• I too am curious.

• What are your odds on this?

• The universe is finite, and not much bigger than the region we observe. There is no multiverse (in particular Many Worlds Interpretation is incorrect and SIA is incorrect). There have been a few (< million) intelligent civilisations before human beings but none of them managed to expand into space, which explains Fermi’s paradox. This also implies a mild form of the “Doomsday” argument (we are fairly unlikely to expand ourselves) but not a strong future filter (if instead millions or billions of civilisations had existed, but none of them expanded, there would be a massive future filter). Probability: 90%.

• I don’t know how to vote on this. I have very strong suspicions that MWI is incorrect (its Copernican allure is its only favorable point), but I disagree that the universe is finite. I feel inclined toward SIA, but I generally reject anthropic reasoning (that’s perhaps a statement about myself rather than about your arguments).

(Also, I require more detailed arguments to dissolve Fermi’s paradox because I don’t believe paradoxes exist in reality.)

• I have very strong suspicions that MWI is incorrect

How would you evaluate correctness of something untestable?

• I don’t know whether this counts as a correctness assessment, but my expectations do not vary with the trueness of MWI, so it’s a needless hypothesis.

• I’d suggest that since you agree with some parts but disagree with others, you assign probability a lot less than 90% to the whole hypothesis. So you should think I’m irrationally overconfident in the whole lot, and upvote please!

If you want some detail, I start from the “Great Filter” argument (see http://hanson.gmu.edu/greatfilter.html). I find it very hard to believe that there is a super-strong future filter ahead of us (such that we have < 1 in a million or < 1 in a billion chance of passing it and then expanding into space). But a relatively weak filter implies that rather few civilizations can have got to our stage of development—there can’t have been millions or billions of them, or some would have got past the filter and expanded, and we would not expect to see the world as we do in fact see it. The argument that the universe is finite (and not too big) then follows from there being a limited number of civilizations. SIA and MWI must also be wrong, because they each imply a very large or infinite number of civilizations.

• Your conclusion doesn’t follow from your premises. The lack of a strong filter implies that a not insignificant proportion of civilizations colonize space. This is consistent with there being a large universe containing many intergalactic civilizations we will never observe because of the expansion of the universe.

• No, in that large universe model we’d expect to be part of one of the expanded, intergalactic civilisations, and not part of a small, still-at-home civilisation. So, as I stated “we would not expect to see the world as we do in fact see it”. Clearly we could still be part of a small civilisation (nothing logically impossible about being in a tiny minority), or we could be in some sort of zoo or ancestor simulation within a big civilisation. But that’s not what we’d expect to see. You might want to see Ken Olum’s paper for more on this: http://arxiv.org/abs/gr-qc/0303070

Incidentally, Olum considers several different ways out of the conflict between expectation and observation: the finite universe is option F (page 5) and that option seems to me to be a lot more plausible than any of the alternatives he sketches. But if you disagree, please tell me which option you think more likely.

• I find that sort of anthropic argument to Prove Too Much. For instance, our universe is about 14 billion years old, but many models have the universe existing trillions of years into the future. If the universe were to survive 280 billion years, then that would put us within the first 5% of the universe’s lifespan. So, if we take an alpha of 5%, we can reject the hypothesis that the universe will last more than 280 billion years. We can also reject the hypothesis that more than 4 trillion humans lives will take place, that any given 1-year-old will reach the age of 20, that humans will have machines capable of flight for more than 2000 years, etc.
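
Spelling out the arithmetic behind the 280-billion-year figure (just a restatement of the Copernican calculation above):

```python
# Copernican 5%-rejection arithmetic: if our vantage point is uniform over
# the universe's total lifespan T, then "we are within the first 5% of T"
# requires T >= current_age / 0.05.
current_age = 14e9                      # years since the Big Bang
rejection_threshold = current_age / 0.05
# Any hypothesised lifespan above 280 billion years lands in the "first 5%"
# rejection region at alpha = 5%.
```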

Olum appears to be making a post hoc argument. The probability that the right sperm would fertilize the right egg and I would be conceived is much less than 1 in a billion, but that doesn’t mean I think I need a new model. The probability of being born prior to a galactic-wide expansion may be very low, but someone has to be born before the expansion. What’s so special about me, that I should reject the possibility that I am such a person?

• If the universe were to survive 280 billion years, then that would put us within the first 5% of the universe’s lifespan. So, if we take an alpha of 5%, we can reject the hypothesis that the universe will last more than 280 billion years.

That sounds like “Copernican” reasoning (assume you are at a random point in time) rather than “anthropic” reasoning (assume you are a random observer from a class of observers). I’m not surprised the Copernican approach gives daft results, because the spatial version (assume you are at a random point in space) also gives daft results: see point 2 here in this thread.

Incidentally, there is a valid anthropic version of your argument: the prediction is that the universe will be uninhabitable 280 billion years from now, or at least contain many fewer observers than it does now. However, in that case, it looks like a successful prediction. The recent discovery that the stars are beginning to go out and that 95% of stars that will ever form have formed already is just the sort of thing that would be expected under anthropic reasoning. But it is totally surprising otherwise.

We can also reject the hypothesis that more than 4 trillion humans lives will take place

The correct application of anthropic reasoning only rejects this as a hypothesis about the average number of observers in a civilisation, not about human beings specifically. If we knew somehow (on other grounds) that most civilisations make it to 10 trillion observers, we wouldn’t predict any less for human beings.

that any given 1-year-old will reach the age of 20,

That’s an instance of the same error: anthropic reasoning does NOT reject the particular hypothesis. We already know that an average human lifespan is greater than 20, so we have no reason to predict less than 20 for a particular child. (The reason is that observing one particular child at age 1 as a random observation from the set of all human observations is no less probable if she lives to 100 than if she lives to 2).

The probability that the right sperm would fertilize the right egg and I would be conceived is much less than 1 in a billion, but that doesn’t mean I think I need a new model

Anthropic reasoning is like any Bayesian reasoning: observations only count as evidence between hypotheses if they are more likely on one hypothesis than another. Also, hypotheses must be fairly likely a priori to be worth considering against the evidence. Suppose you somehow got a precise observation of sperm meeting egg to make you, with a genome analysis of the two: that exact DNA readout would be extremely unlikely under the hypothesis of the usual laws of physics, chemistry and biology. But that shouldn’t make you suspect an alternative hypothesis (e.g. that you are some weird biological experiment, or a special child of god) because that exact DNA readout is extremely unlikely on those hypotheses as well. So it doesn’t count as evidence for these alternatives.

The probability of being born prior to a galactic-wide expansion may be very low, but someone has to be born before the expansion. What’s so special about me, that I should reject the possibility that I am such a person?

If all hypotheses gave extremely low probability of being born before the expansion, then you are correct. But the issue is that some hypotheses give high probability that an observer finds himself before expansion (the hypotheses where no civilisations expand, and all stay small). So your observations do count as evidence to decide between the hypotheses.

• I got a bit distracted by the “anthropic reasoning is wrong” discussion below, and missed adding something important. The conclusion that “we would not expect to see the world as we in fact see it” holds in a big universe regardless of the approach taken to anthropic reasoning. It’s worth spelling that out in some detail.

1. Suppose I don’t want to engage in any form of anthropic reasoning or observation sampling hypothesis. Then the large universe model leaves me unable to predict anything much at all about my observations. I might perhaps be in a small civilisation, but then I might be in a simulation, or a Boltzmann Brain, or mad, or a galactic emperor, or a worm, or a rock, or a hydrogen molecule. I have no basis for assigning significant probability to any of these—my predictions are all over the place. So I certainly can’t expect to observe that I’m an intelligent observer in a small civilisation confined to its home planet.

2. Suppose I adopt a “Copernican” hypothesis—I’m just at a random point in space. Well now, the usual big and small universe hypotheses predict that I’m most likely going to be somewhere in intergalactic or interstellar space, so that’s not a great predictive success. The universe model which most predicts my observations looks frankly weird… instead of a lot of empty space, it is a dense mass of “computronium” running lots of simulations of different observers, and I’m one of them. Even then I can’t expect to be in a simulation of a small civilisation, since the sim could be of just about anything. Again, not a great predictive success.

3. Suppose I adopt SIA reasoning. Then I should just ignore the finite universes, since they contribute zero prior probability. Or if I’ve decided for some reason to keep all my universe hypotheses finite, then I should ignore all but the largest ones (ones with 3^^^3 or more galaxies). Among the infinite-or-enormous universes, they nearly all have expanded civilisations, and so under SIA, nearly all predict that I’m going to be in a big civilisation. The only ones which predict otherwise include a “universal doom”—the probability that a small civilisation ever expands off its home world is zero, or negligibly bigger than zero. That’s a massive future filter. So SIA and big universes can—just about—predict my observations, but only if there is this super-strong filter. Again, that has low prior probability, and is not what I should expect to see.

4. Suppose I adopt SSA reasoning. I need to specify the reference class, and it is a bit hard to know which one to use. In a big universe, different reference classes will lead to very different predictions: picking out small civilisations, large civilisations, AIs, SIMs, emperors and so on (plus worms, rocks and hydrogen for the whackier reference classes). As I don’t know which to use, my predictions get smeared out across the classes, and are consequently vague. Again, I can’t expect to be in a small civilisation on its home planet.

By contrast, look at the small universe models with only a few civilisations. A fair chunk of these models have modest future filters so none of the civilisations expand. For those models, SSA looks in quite good shape, as there is quite a wide choice of reference classes that all lead to the same prediction. Provided the reference class predicts I am an intelligent observer at all then it must predict I am in a small civilisation confined to its home planet (because all civilisations are like that). Of course there are the weird classes which predict I’m a worm and so on—nothing we can do about those—but among the sensible classes we get a hit.

So this is where I’m coming from. The only model which leads me to expect to see what I actually see is a small universe model, with a modest future filter. Within that model, I will need to adopt some sort of SSA-reasoning to get a prediction, but I don’t have to know in advance which reference class to use: any reference class which selects an intelligent observer predicts roughly what I see. None of the other models or styles of reasoning lead to that prediction.

• This sort of anthropic reasoning is wrong. Consider the following experiment.

A fair coin is tossed. If the result is H, you are cloned into 10^10 copies, and all of those copies except one are placed in the Andromeda galaxy. Another copy remains in the Milky Way. If the result is T, no cloning occurs and you remain in the Milky Way. Either way, the “you” in Milky Way has no immediate direct way to know about the result of the coin toss.

Someone, call her “anthropic mugger”, comes to you and offers a bet. She can perform an experiment which will reveal the result of the coin toss (but she hasn’t done it yet). If you accept the bet and the coin toss turns out to be T, she pays you 1\$. If you accept the bet and the coin toss turns out to be H, you pay her 1000\$. Do you accept the bet?

Reasoning along the same lines as you did to conclude there are no large civilizations, you should accept the bet. But this means your expected gain before the coin toss is −499.5\$. So, before the coin toss it is profitable for you to change your way of reasoning so you won’t be tempted to accept the bet.
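
The two numbers driving this, sketched numerically (an SSA-style update plus the pre-toss expectation; the payoffs are the \$1 against \$1000 from the bet above):

```python
copies_if_heads = 10 ** 10   # clones created on H; exactly one copy is in the Milky Way

# SSA-style update on learning "I am the Milky Way copy":
p_mw_given_h = 1 / copies_if_heads   # a random copy is almost never the MW one under H
p_mw_given_t = 1.0                   # under T the lone copy is in the MW for certain
p_h_given_mw = 0.5 * p_mw_given_h / (0.5 * p_mw_given_h + 0.5 * p_mw_given_t)
# ~1e-10: the SSA reasoner becomes near-certain the coin landed T,
# so ex post she is happy to bet heavily on T.

# But evaluated before the toss, a bet that wins $1 on one outcome and loses
# $1000 on the other has expected value:
ev_accept = 0.5 * 1 + 0.5 * (-1000)   # = -499.5 dollars
```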

There’s no reason to accept the bet unless in the cloning scenario you care much less about the copy of you in Milky Way than in the no-cloning scenario. So, there’s no reason to assume there are no large civilizations if the existence of large civilizations wouldn’t make us care much less about our own.

• There are a number of problems with that:

1) You don’t specify whether the bet is offered to all my copies or just to one of them, or if to just one of them, whether it is guaranteed to be the one in the Milky Way. Or if the one in the Milky Way knows he is in the Milky Way when taking the bet, and so on.

Suppose I am offered the bet before knowing whether I am in Andromeda or Milky Way. What odds should I accept on the coin toss: 50/50? Suppose I am then told I am in the Milky Way… what odds should I now accept on the coin toss: still 50/50? If you say 50/50 in both cases then you are a “double-halfer” (in the terminology of Sleeping Beauty problems) and you can be Dutch-booked. If you answer other than 50/50 in one case or the other, then you are saying there are circumstances where you’d bet at odds different (probably very different) from the physical odds of a fair coin toss and, without any context, that sounds rather crazy. So whatever you say, there is a bullet to bite.

2) I am, by the way, quite aware of the literature on Anthropic Decision Theory (especially Stuart Armstrong’s paper) and since my utility function is roughly the average utility for my future copies (rather than total utility) I feel inclined to bet with the SSA odds. Yes, this will lead to the “me” in the Milky Way making a loss in the case of “H” but at that stage he counts for only a tiny sliver of my utility function, so I think I’ll take the risk and eat the loss. If I modify my reasoning now then there are other bets which will lead to a bigger expected loss (or even a guaranteed loss if I can be Dutch-booked).

Remember though that I only assigned 90% probability to the original hypothesis. Part of the remaining 10% uncertainty is that I am not fully confident that SSA odds are the right ones to use. So the anthropic mugger might not be able to make \$500 off me (I’m likely to refuse the 1000:1 bet), but she probably could make \$5 off me.

3) As in many such problems, you oversimplify by specifying in advance that the coin is fair, which then leads to the crazy-sounding betting odds (and the need to bite a bullet somewhere). But in the real world case, the coin has unknown bias (as we don’t know the size of the future filter). This means we have to try to estimate the bias (size of filter) based on the totality of our evidence.

Suppose I’m doubtful about the fair coin hypothesis and have two other hypotheses: heavy bias towards heads or heavy bias towards tails. Then it seems very reasonable that under the “bias towards heads” hypothesis I would expect to be in Andromeda, and if I discover I am not, that counts as evidence for the “bias towards tails” hypothesis. So as I now suspect bias in one particular direction, why still bet on 50/50 odds?

• 1) You don’t specify whether the bet is offered to all my copies or just to one of them, or if to just one of them, whether it is guaranteed to be the one in the Milky Way. Or if the one in the Milky Way knows he is in the Milky Way when taking the bet, and so on.

I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the “large civilizations” problem since we know we’re in a small civilization.

Suppose I am offered the bet before knowing whether I am in Andromeda or Milky Way. What odds should I accept on the coin toss: 50/​50?

In your version of the problem the clones get to bet too, so the answer depends on how your utility is accumulated over clones.

So whatever you say, there is a bullet to bite.

If you have a well-defined utility function and you’re using UDT, everything makes sense IMO.

Suppose I’m doubtful about the fair coin hypothesis and have two other hypotheses: heavy bias towards heads or heavy bias towards tails.

It doesn’t change anything in principle. You just added another coin toss before the original coin toss which affects the odds of the latter.

• I meant that the bet is offered to the copy in the Milky Way and that he knows he is in the Milky Way. This is the right analogy with the “large civilizations” problem since we know we’re in a small civilization.

Well we currently observe that we are in a small civilisation (though we could be in a zoo or simulation or whatever). But to assess the hypotheses in question we have to (in essence) forget that observation, create a prior for small universe versus big universe hypotheses, see what the hypotheses predict we should expect to observe, and then update when we “notice” the observation.

Alternatively, if you adopt the UDT approach, you have to consider what utility function you’d have before knowing whether you are in a big civilization or not. What would the “you” then like to commit the “you” now to deciding?

If you think you’d care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.

If you think you’d care about total utility (so the small civs get equal weight regardless) then be careful that it’s bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).

It doesn’t change anything in principle. You just added another coin toss before the original coin toss which affects the odds of the latter.

A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts). You can’t create crazy-sounding conclusions with metaphorical coins (i.e. situations where you bet at million to 1 odds, despite knowing that the coin toss was a fair one.)

• If you think you’d care about average utility in that original situation then naturally the small civilisations will get less weight in outcomes where there are big civilisations as well. Whereas if there are only small civilisations, they get all the weight. No difficulties there.

If you think you’d care about total utility (so the small civs get equal weight regardless) then be careful that it’s bounded somehow. Otherwise you are going to have a known problem with expected utilities diverging (see http://lesswrong.com/lw/fg7/sia_fears_expected_infinity/).

I think that I care about a time-discounted utility integral within a future light-cone. Large civilizations entering this cone don’t reduce the utility of small civilizations.

A metaphorical coin with unknown (or subjectively-assigned) odds is quite a different beast from a physical coin with known odds (based on physical facts).

I don’t believe in different kinds of coins. They’re all the same Bayesian probabilities. It’s a meta-Occam razor: I don’t see any need for introducing these distinct categories.

• I think that I care about a time-discounted utility integral within a future light-cone. Large civilizations entering this cone don’t reduce the utility of small civilizations.

I’m not sure how you apply that in a big universe model… most of it lies outside any given light-cone, so which one do you pick? Imagine you don’t yet know where you are: do you sum utility across all light-cones (a sum which could still diverge in a big universe) or take the utility of an average light-cone? Also, how do you do the time-discounting if you don’t yet know when you are?

My initial guess is that this utility function won’t encourage betting on really big universes (as there is no increase in utility of the average lightcone from winning the bet), but it will encourage betting on really dense universes (packed full of people or simulations of people). So you should maybe bet that you are in a simulation, running on a form of dense “computronium” in the underlying universe.

I’m not sure how you apply that in a big universe model… most of it lies outside any given light-cone, so which one do you pick? Imagine you don’t yet know where you are: do you sum utility across all light-cones (a sum which could still diverge in a big universe) or take the utility of an average light-cone? Also, how do you do the time-discounting if you don’t yet know when you are?

The possible universes I am considering already come packed into a future light cone (I don’t consider large universes directly). The probability of a universe is proportional to 2^{-its Kolmogorov complexity} so expected utility converges. Time-discounting is relative to the vertex of the light-cone.

...it will encourage betting on really dense universes (packed full of people or simulations of people).

Not really. Additive terms in the utility don’t “encourage” anything, multiplicative factors do.

• The possible universes I am considering already come packed into a future light cone (I don’t consider large universes directly).

I was a bit surprised by this… if your possible models only include one light-cone (essentially just the observable universe) then they don’t look too different from those of my stated hypothesis (at the start of the thread). What is your opinion then on other civilisations in the light-cone? How likely are these alternatives?

• No other civilisations exist or have existed in the light-cone apart from us.

• A few have existed apart from us, but none have expanded (yet)

• A few have existed, and a few have expanded, but we can’t see them (yet)

• Lots have existed, but none have expanded (very strong future filter)

• Lots have existed, and a few have expanded (still a strong future filter), but we can’t see the expanded ones (yet)

• Lots have existed, and lots have expanded, so the light-cone is full of expanded civilisations; we don’t see that, but that’s because we are in a zoo or simulation of some sort.

...it will encourage betting on really dense universes (packed full of people or simulations of people).

Not really. Additive terms in the utility don’t “encourage” anything, multiplicative factors do.

Here’s how it works. Imagine the “mugger” offers all observers a bet (e.g. at your 1000:1 odds) on whether they believe they are in a simulation, within a dense “computronium” universe packed full of computers simulating observers. Suppose only a tiny fraction (less than 1 in a trillion) of universe models are like that, and the observers all know this (so this is equivalent to a very heavily weighted coin landing against its weight). But still, by your proposed utility function, UDT observers should accept the bet, since in the freak universes where they win, huge numbers of observers win \$1 each, adding a colossal amount of total utility to the light-cone. Whereas in the more regular universes where they lose the bet, relatively fewer observers will lose \$1000 each. Hence accepting the bet creates more expected utility than rejecting it.
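
To make the worry concrete, here is a toy version with made-up numbers (the 1-in-a-trillion fraction is from the comment; the observer counts per universe are my assumptions for illustration):

```python
p_dense = 1e-12          # prior weight on "computronium" universe models (from above)
observers_dense = 1e30   # observers per dense universe -- assumed for illustration
observers_normal = 1e10  # observers per ordinary universe -- assumed

# Every observer is offered: win $1 if in a dense simulated world, lose $1000 if not.
# A single observer's expected gain is hopelessly negative...
ev_individual = p_dense * 1 - (1 - p_dense) * 1000

# ...but summed, unweighted, over all observers in the light-cone, accepting wins,
# because the rare dense worlds contribute astronomically many $1 winners:
eu_accept_total = p_dense * observers_dense * 1 - (1 - p_dense) * observers_normal * 1000
```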

Another issue you might have concerns the time-discounting. Suppose 1 million observers live early on in the light-cone, and 1 trillion live late in the light-cone (and again all observers know this). The mugger approaches all observers before they know whether they are “early” or “late” and offers them a 50:50 bet on whether they are “early” rather than “late”. The observers all decide to accept the bet, knowing that 1 million will win and 1 trillion will lose: however the utility of the losers is heavily discounted, relative to the winners, so the total expected time-discounted utility is increased by accepting the bet.

• I was a bit surprised by this… if your possible models only include one light-cone (essentially just the observable universe) then they don’t look too different from those of my stated hypothesis (at the start of the thread).

My disagreement is that the anthropic reasoning you use is not a good argument for non-existence of large civilizations.

How likely are these alternatives? …

I am using a future light cone whereas your alternatives seem to be formulated in terms of a past light cone. Let me say that I think the probability of ever encountering another civilization is related to the ratio {asymptotic value of Hubble time} / {time since the appearance of civilizations became possible}. I can’t find the numbers this second, but my feeling is such an occurrence is far from certain.

Here’s how it works...

Very good point! I think that if the “computronium universe” is not suppressed by some huge factor due to some sort of physical limit / great filter, then there is a significant probability such a universe arises from post-human civilization (e.g. due to FAI). All decisions with possible (even small) impact on the likelihood of and/or the properties of this future get a huge utility boost. Therefore I think decisions with long term impact should be made as if we are not in a simulation, whereas decisions which involve purely short term optimizations should be made as if we are in a simulation (although I find it hard to imagine such a decision in which it is important whether we are in a simulation).

Another issue you might have concerns the time-discounting...

The effective time discount function is of rather slow decay because the sum over universes includes time-translated versions of the same universe. As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t}, which is only slightly faster than 1/t. Nevertheless, for huge time differences your argument is correct. This is actually a good thing, since otherwise your decisions would be dominated by the Boltzmann brains appearing far after heat death.

• As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t}, which is only slightly faster than 1/t.

It is about 1/t x 1/log t x 1/log log t etc. for most values of t (taking base-2 logarithms). There are exceptions for very regular values of t.
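
The identity behind that decay rate, sketched numerically: for a typical (incompressible) t, K(t) ≈ log t + log log t + log log log t, and 2 to the minus that is exactly 1/(t · log t · log log t):

```python
import math

def typical_discount(t):
    """2^-(log2 t + log2 log2 t + log2 log2 log2 t), the iterated-log
    approximation to K(t) for a typical (incompressible) t."""
    k = math.log2(t) + math.log2(math.log2(t)) + math.log2(math.log2(math.log2(t)))
    return 2 ** -k

# Algebraically this equals 1 / (t * log2(t) * log2(log2(t))): only a little
# faster decay than 1/t, as stated above.
t = 10 ** 6
direct = 1 / (t * math.log2(t) * math.log2(math.log2(t)))
```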

Incidentally, I’ve been thinking about a similar weighting approach towards anthropic reasoning, and it seems to avoid a strong form of the Doomsday Argument (one where we bet heavily against our civilisation expanding). Imagine listing all the observers (or observer moments) in order of appearance since the Big Bang (use cosmological proper time). Then assign a prior probability 2^-K(n) to being the nth observer (or moment) in that sequence.

Now let’s test this distribution against my listed hypotheses above:

1. No other civilisations exist or have existed in the universe apart from us.

Fit to observations: Not too bad. After including the various log terms in 2^-K(n), the probability of me having an observer rank n between 60 billion and 120 billion (we don’t know it more precisely than that) seems to be about 1/log (60 billion) x 1/log (36), or roughly 1/200.
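
(Numerically, reading “log” as base-2 — my assumption, since the comment doesn’t say — that product works out as follows; note log2(60 billion) is itself about 36, which is where the second factor comes from:)

```python
import math

# 1/log(60 billion) * 1/log(36) with base-2 logarithms (assumed):
p_rank = (1 / math.log2(60e9)) * (1 / math.log2(36))
# ~0.0054, i.e. roughly 1 in 200
```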

Still, the hypothesis seems a bit dodgy. How could there be exactly one civilisation over such a large amount of space and time? Perhaps the evolution of intelligence is just extraordinarily unlikely, a rare fluke that only happened once. But then the fact that the “fluke” actually happened at all makes this hypothesis a poor fit. A better hypothesis is that the chance of intelligence evolving is high enough to ensure that it will appear many times in the universe: Earth-now is just the first time it has happened. If observer moments were weighted uniformly, we would rule that out (we’d be very unlikely to be first), but with the 2^-K(n) weighting, there is rather high probability of being a smaller n, and so being in the first civilisation. So this hypothesis does actually work. One drawback is that living 13.8 billion years after the Big Bang, and with only 5% of stars still to form, we may simply be too late to be the first among many. If there were going to be many civilisations, we’d expect a lot of them to have already arrived.

Predictions for Future of Humanity: No doomsday prediction at all; the probability of my n falling in the range 60-120 billion is the same sum over 2^-K(n) regardless of how many people arrive after me. This looks promising.

2. A few have existed apart from us, but none have expanded (yet)

Fit to observations: Pretty good, e.g. if the average number of observers per civilisation is less than 1 trillion. In this case, I can’t know what my n is (since I don’t know exactly how many civilisations existed before human beings, or how many observers they each had). What I can infer is that my relative rank within my own civilisation will look like it fell at random between 1 and the average population of a civilisation. If that average population is less than 1 trillion, there will be a probability of > 1 in 20 of seeing a relative rank like my current one.

Predictions for Future of Humanity: There must be a fairly low probability of expanding, since other civilisations before us didn’t expand. If there were 100 of them, our own estimated probability of expanding would be less than 0.01, and so on. But notice that we can’t infer anything in particular about whether our own civilisation will expand: if it does expand (against the odds) then there will be a very large number of observer moments after us, but these will fall further down the tail of the Kolmogorov distribution. The probability of my having a rank n where it actually is (at some number before the expansion) doesn’t change. So I shouldn’t bet against expansion at odds much different from 100:1.

3. A few have existed, and a few have expanded, but we can’t see them (yet)

Fit to observations: Poor. Since some civilisations have already expanded, my own n must be very high (e.g. up in the trillions of trillions). But then most values of n which are that high and near to my own rank will correspond to observers inside one of the expanded civilisations. Since I don’t know my own n, I can’t expect it to just happen to fall inside one of the small civilisations. My observations look very unlikely under this model.

Predictions for Future of Humanity: Similar to 2

4. Lots have existed, but none have expanded (very strong future filter)

Fit to observations: Mixed. It can be made to fit if the average number of observers per civilisation is less than 1 trillion; this is for reasons similar to 2. While that gives a reasonable degree of fit, the prior likelihood of such a strong filter seems low.

Predictions for Future of Humanity: Very pessimistic, because of the strong universal filter.

5. Lots have existed, and a few have expanded (still a strong future filter), but we can’t see the expanded ones (yet)

Fit to observations: Poor. Things could still fit if the average population of a civilisation is less than a trillion. But that requires that the small, unexpanded, civilisations massively outnumber the big, expanded ones: so much so that most of the population is in the small ones. This requires an extremely strong future filter. Again, the prior likelihood of this strength of filter seems very low.

Predictions for Future of Humanity: Extremely pessimistic, because of the strong universal filter.

6. Lots have existed, and lots have expanded, so the universe is full of expanded civilisations; we don’t see that, but that’s because we are in a zoo or simulation of some sort.

Fit to observations: Poor: even worse than in case 5. Most values of n close to my own (enormous) rank will be in one of the expanded civilisations. The most likely case seems to be that I’m in a simulation; but still there is no reason at all to suppose the simulation would look like this.

Predictions for Future of Humanity: Uncertain. A significant risk is that someone switches our simulation off, before we get a chance to expand and consume unsustainable amounts of simulation resources (e.g. by running our own simulations in turn). This switch-off risk is rather hard to estimate. Most simulations will eventually get switched off, but the Kolmogorov weighting may put us into one of the earlier simulations, one which is running when lots of resources are still available, and doesn’t get turned off for a long time.

• I am using a future light cone whereas your alternatives seem to be formulated in terms of a past light cone.

I was assuming that the “vertex” of your light cone is situated at or shortly after the Big Bang (e.g. maybe during the first few minutes of nucleosynthesis). In that case, the radius of the light cone “now” (at t = 13.8 billion years since Big Bang) is the same as the particle horizon “now” of the observable universe (roughly 45 billion light-years). So the light-cone so far (starting at Big Bang and running up to 13.8 billion years) will be bigger than Earth’s past light-cone (starting now and running back to the Big Bang) but not massively bigger.

This means that there might be a few expanded civilisations that are outside our past light-cone (so we don’t see them now, but could run into them in the future). Still, if there are lots of civilisations in your light cone, and only a few have expanded, that still implies a very strong future filter. So my main point remains: given that a super-strong future filter looks very unlikely, most of the probability will be concentrated on models where there are only a few civilisations to start with (so not many to get filtered out; a modest filter does the trick).

The effective time discount function is of rather slow decay because the sum over universes includes time translated versions of the same universe. As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t} which is only slightly faster than 1/​t.

Ahh… I was assuming you discounted faster than that, since you said the utilities converged. There is a problem with Kolmogorov discounting of t. Consider what happens at t = 3^^^3 years from now. This has Kolmogorov complexity K(t) much, much less than log(3^^^3): in most models of computation K(t) will be a few thousand bits or less. But the width of the light-cone at t is around 3^^^3, so the utility at t is dominated by around 3^^^3 Boltzmann Brains, and the product U(t)·2^-K(t) is also going to be around 3^^^3. You’ll get similar large contributions at t = 4^^^^4 and so on; in short I believe your summed discounted utility is diverging (or in any case dominated by the Boltzmann Brains).
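In rough symbols (treating the few-thousand-bit program for t as length O(10³)), the problematic term is:

```latex
U(t)\,2^{-K(t)} \;\sim\; \underbrace{3\uparrow\uparrow\uparrow 3}_{\text{Boltzmann Brains}} \;\times\; \underbrace{2^{-O(10^{3})}}_{\text{weight of }t} \;\approx\; 3\uparrow\uparrow\uparrow 3
\qquad \text{at } t = 3\uparrow\uparrow\uparrow 3,
```

so the series \(\sum_t U(t)\,2^{-K(t)}\) contains arbitrarily large terms (again at t = 4^^^^4, and so on) and cannot converge.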

One way to fix this may be to discount each location in space and time (s,t) by 2^-K(s,t) and then let u(s,t) represent a utility density (say the average utility per Planck volume). Then sum u(s,t)·2^-K(s,t) over all values of (s,t) in the future light-cone. Provided the utility density is bounded (which seems reasonable), the whole sum converges.
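Written out, the reason this fix works: the weights 2^-K(s,t) come from prefix-free programs, so by Kraft’s inequality they sum to at most 1, and a bounded density then bounds the whole sum:

```latex
\Bigl|\sum_{(s,t)} u(s,t)\,2^{-K(s,t)}\Bigr|
\;\le\; u_{\max} \sum_{(s,t)} 2^{-K(s,t)}
\;\le\; u_{\max}.
```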

• I was assuming that the “vertex” of your light cone is situated at or shortly after the Big Bang (e.g. maybe during the first few minutes of nucleosynthesis).

No, it can be located absolutely anywhere. However you’re right that the light cones with vertex close to the Big Bang will probably have large weight due to low K-complexity.

...given that a super-strong future filter looks very unlikely, most of the probability will be concentrated on models where there are only a few civilisations to start with.

This looks correct, but it is different from your initial argument. In particular there’s no reason to believe MWI is wrong or anything like that.

...in short I believe your summed discounted utility is diverging (or in any case dominated by the Boltzmann Brains).

It is guaranteed to converge and seems to be pretty harsh on BBs too. Here is how it works. Every “universe” is an infinite sequence of bits encoding a future light cone. The weight of the sequence is 2^{-K-complexity}. More precisely, I sum over all programs producing such sequences and give weight 2^{-length} to each. Since the sum of 2^{-length} over all programs is at most 1, I get a well-defined probability measure. Each sequence gets assigned a utility by a computable function that looks like an integral over space-time with temporal discount. The temporal discount here can be fast, e.g. exponential. So the utility function is bounded and its expectation value converges. However, the effective temporal discount is slow, since for every universe its sub-light-cones are also within the sum. Nevertheless it’s not so slow that BBs come out ahead. If you put the vertex of the light cone at any given point (e.g. time 4^^^^4) there will be few BBs within the fast cutoff time, and most far points are suppressed due to high K-complexity.
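A toy numerical illustration of the convergence claim (the program codes here are stand-ins, not real universe-generating programs): with a prefix-free set of codes, the 2^-length weights sum to at most 1, so any bounded per-universe utility has a bounded expectation.

```python
# Prefix-free binary codes: no code is a prefix of another, so by
# Kraft's inequality the weights 2^-length sum to at most 1.
programs = ["0", "10", "110", "1110"]
weights = [2.0 ** -len(p) for p in programs]
assert sum(weights) <= 1.0  # 0.5 + 0.25 + 0.125 + 0.0625 = 0.9375

# Arbitrary bounded per-universe utilities in [-1, 1]:
utilities = [0.3, -0.8, 1.0, -0.1]
expected_u = sum(w * u for w, u in zip(weights, utilities))

# The expectation is bounded by the utility bound, so it converges.
assert abs(expected_u) <= max(abs(u) for u in utilities)
print(expected_u)  # 0.06875
```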

• No, it can be located absolutely anywhere. However you’re right that the light cones with vertex close to the Big Bang will probably have large weight due to low K-complexity.

Ah, I see what you’re getting at. If the vertex is at the Big Bang, then the shortest programs basically simulate a history of the observable universe. Just start from a description of the laws of physics and some (low entropy) initial conditions, then read in random bits whenever there is an increase in entropy. (For technical reasons the programs will also need to simulate a slightly larger region just outside the light cone, to predict what will cross into it).

If the vertex lies elsewhere, the shortest programs will likely still simulate starting from the Big Bang, then “truncate” i.e. shift the vertex to a new point (s, t) and throw away anything outside the reduced light cone. So I suspect that this approach gives a weighting rather like 2^-K(s,t) for light-cones which are offset from the Big Bang. Probably most of the weight comes from programs which shift in t but not much in s.

The temporal discount here can be fast e.g. exponential.

That’s what I thought you meant originally: this would ensure that the utility in any given light-cone is bounded, and hence that the expected utility converges.

...given that a super-strong future filter looks very unlikely, most of the probability will be concentrated on models where there are only a few civilisations to start with.

This looks correct, but it is different from your initial argument. In particular there’s no reason to believe MWI is wrong or anything like that.

I disagree. If models like MWI and/or eternal inflation are taken seriously, then they imply the existence of a huge number of civilisations (spread across multiple branches or multiple inflating regions), and a huge number of expanded civilisations (unless the chance of expansion is exactly zero). Observers should then predict that they will be in one of the expanded civilisations. (Or in UDT terms, they should take bets that they are in such a civilisation). Since our observations are not like that, this forces us into simulation conclusions (most people making our observations are in sims, so that’s how we should bet). The problem is still that there is a poor fit to observations: yes we could be in a sim, and it could look like this, but on the other hand it could look like more or less anything.

Incidentally, there are versions of inflation and many worlds which don’t run into that problem. You can always take a “local” view of inflation (see for instance these papers), and a “modal” interpretation of many worlds (see here). Combined, these views imply that all that actually exists is within one branch of a wave function constructed over one observable universe. These “cut-down” interpretations make either the same physical predictions as the “expansive” interpretations, or better predictions, so I can’t see any real reason to believe in the expansive versions.

• So I suspect that this approach gives a weighting rather like 2^-K(s,t) for light-cones which are offset from the Big Bang.

In some sense it does, but we must be wary of technicalities. In initial singularity models I’m not sure it makes sense to speak of a “light cone with vertex in the singularity”, and it certainly doesn’t make sense to speak of a privileged point in space. In eternal inflation models there is no singularity, so it might make sense to speak of the “Big Bang” point in space-time; however, it is slightly “fuzzy”.

I disagree. If models like MWI and/or eternal inflation are taken seriously, then they imply the existence of a huge number of civilisations (spread across multiple branches or multiple inflating regions), and a huge number of expanded civilisations (unless the chance of expansion is exactly zero). Observers should then predict that they will be in one of the expanded civilisations. (Or in UDT terms, they should take bets that they are in such a civilisation). Since our observations are not like that, this forces us into simulation conclusions (most people making our observations are in sims, so that’s how we should bet).

I don’t think it does. If we are not in a sim, our actions have potentially huge impact since they can affect the probability and the properties of a hypothetical expanded post-human civilization.

Incidentally, there are versions of inflation and many worlds which don’t run into that problem. You can always take a “local” view of inflation (see for instance these papers), and a “modal” interpretation of many worlds (see here). Combined, these views imply that all that actually exists is within one branch of a wave function constructed over one observable universe.

In UDT it doesn’t make sense to speak of what “actually exists”. Everything exists, you just assign different weights to different parts of “everything” when computing utility. The “U” in UDT is for “updateless” which means that you don’t update on being in a certain branch of the wavefunction to conclude other branches “don’t exist”, otherwise you lose in counterfactual mugging.

• I don’t think it does. If we are not in a sim, our actions have potentially huge impact since they can affect the probability and the properties of a hypothetical expanded post-human civilization.

So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim? But on the off-chance that you are not a sim, you’re going to make decisions as if you were in the real world, because those decisions (when suitably generalized across all possible light-cones) have a huge utility impact. Is that right?

The problem I have is that this only works if your utility function is very impartial (it is dominated by “pro bono universo” terms, rather than “what’s in it for me” or “what’s in it for us” terms). Imagine for instance that you work really hard to ensure a positive singularity, and succeed. You create a friendly AI, it starts spreading, and gathering huge amounts of computational resources… and then our simulation runs out of memory, crashes, and gets switched off. This doesn’t sound like a good idea “for us”, does it?

This all seems to be part of a general problem with asking UDT to model selfish (or self-interested) preferences. Perhaps it can’t. In which case UDT might be a great decision theory for saints, but not for regular human beings. And so we might not want to program UDT into our AI in case that AI thinks it’s a good idea to risk crashing our simulation (and killing us all in the process).

In UDT it doesn’t make sense to speak of what “actually exists”. Everything exists, you just assign different weights to different parts of “everything” when computing utility.

I’ve remarked elsewhere that UDT works best against a background of modal realism, and that’s essentially what you’ve said here. But here’s something for you to ponder. What if modal realism is wrong? What if there is, in fact, evidence that it is wrong, because the world as we see it is not what we should expect to see if it was right? Isn’t it maybe a good idea to then—er—update on that evidence?

Or does a UDT agent have to stay dogmatically committed to modal realism in the face of whatever it sees? That doesn’t seem very rational does it?

• So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim?

It depends on the stakes of the bet.

But on the off-chance that you are not a sim, you’re going to make decisions as if you were in the real world, because those decisions (when suitably generalized across all possible light-cones) have a huge utility impact. Is that right?

It’s not an “off-chance”. It is meaningless to speak of the “chance I am a sim”: some copies of me are sims, some copies of me are not sims.

This all seems to be part of a general problem with asking UDT to model selfish (or self-interested) preferences. Perhaps it can’t.

It surely can: just give more weight to humans of a very particular type (“you”).

What if modal realism is wrong? What if there is, in fact, evidence that it is wrong, because the world as we see it is not what we should expect to see if it was right?

Subjective expectations are meaningless in UDT. So there is no “what we should expect to see”.

Or does a UDT agent have to stay dogmatically committed to modal realism in the face of whatever it sees? That doesn’t seem very rational does it?

Does it have to stay dogmatically committed to Occam’s razor in the face of whatever it sees? If not, how would it arrive at a replacement without using Occam’s razor? There must be some axioms at the basis of any reasoning system.

• So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim?

It depends on the stakes of the bet.

I thought we discussed an example earlier in the thread? The gambler pays \$1000 if not in a simulation; the bookmaker pays \$1 if the gambler is in a simulation. In terms of expected utility, it is better for “you” (that is, all linked instances of you) to take the gamble, even if the vast majority of light-cones don’t contain simulations.
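A toy expected-utility version of this bet (the numbers mirror the \$1000 / \$1 stakes above, treating dollars as utils for simplicity):

```python
# If every linked copy of "you" takes the same action, what matters is
# the average payoff across copies. sim_fraction = fraction of copies
# that are sims; a sim copy wins `win`, a real copy pays `loss`.
def expected_gain(sim_fraction, win=1.0, loss=1000.0):
    return sim_fraction * win - (1 - sim_fraction) * loss

# Break-even fraction is loss / (win + loss) = 1000/1001 ~ 0.999.
assert expected_gain(0.9999) > 0   # overwhelmingly sims: take the bet
assert expected_gain(0.99) < 0     # "only" 99% sims: decline
```

On the picture where the vast majority of copies of you are sims, the bet is worth taking even if the vast majority of light-cones contain no simulations at all.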

It is meaningless to speak of the “chance I am a sim”: some copies of me are sims, some copies of me are not sims

No it isn’t meaningless: chances simply become operationalised in terms of bets, or other decisions with variable payoff. The “chance you are a sim” becomes equal to the fraction of a util you are prepared to pay for a betting slip which pays out one util if you are a sim, and pays nothing otherwise. (Lots of linked copies of “you” take the gamble; some win, some lose.)

Incidentally, in terms of original modal realism (due to David Lewis), “you” are a concrete unique individual who inhabits exactly one world, but it is unknown which one. Other versions of “you” are your “counterparts”. It is usually not possible to group all your counterparts together and treat them as a single (distributed) being, YOU, because the counterpart relation is not an equivalence relation (it doesn’t partition possible people into neat equivalence classes). As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative “drift” along the chain, so that the ends are very different from each other (and not counterparts).

Subjective expectations are meaningless in UDT. So there is no “what we should expect to see”.

A subjective expectation is rather like a bet: it is a commitment of mental resource to modelling certain lines of future observations (and preparing decisions for such a case). If you spend most of your modelling resource on a scenario which doesn’t materialise, this is like losing the bet. So it is reasonable to talk about subjective expectations in UDT; just model them as bets.

Does it have to stay dogmatically committed to Occam’s razor in the face of whatever it sees? If not, how would it arrive at a replacement without using Occam’s razor?

Occam’s razor here is just a method for weighting hypotheses in the prior. It is only “dogmatic” if the prior assigns weights in such an unbalanced way that no amount of evidence will ever shift the weights. If your prior had truly massive weight (e.g. infinite weight) in favour of many worlds, then it will never shift, so that looks dogmatic. But to be honest, I rather doubt this. You weren’t born believing in the many worlds interpretation (or in modal realism) and if you are a normal human being you most likely regarded it as quite outlandish at some point. Then some line of evidence or reasoning caused you to shift your opinion (e.g. because it seemed simpler, or overall a better explanation for physical evidence). If it shifted one way, then considering other evidence could shift it back again.

• In terms of expected utility, it is better for “you” (that is, all linked instances of you) to take the gamble, even if the vast majority of light-cones don’t contain simulations.

It is not the case if the money can be utilized in a manner with long-term impact.

No it isn’t meaningless: chances simply become operationalised in terms of bets, or other decisions with variable payoff.

This doesn’t give an unambiguous recipe to compute probabilities since it depends on how the results of the bets are accumulated to influence utility. An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?

Incidentally, in terms of original modal realism (due to David Lewis), “you” are a concrete unique individual who inhabits exactly one world, but it is unknown which one. Other versions of “you” are your “counterparts”. It is usually not possible to group all your counterparts together and treat them as a single (distributed) being, YOU, because the counterpart relation is not an equivalence relation (it doesn’t partition possible people into neat equivalence classes). As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative “drift” along the chain, so that the ends are very different from each other (and not counterparts).

UDT doesn’t seem to work this way. In UDT, “you” are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extent with different physical entities in different worlds. This leads to the question of whether some algorithms are more “conscious” than others. I don’t think UDT currently has an answer for this, but neither do other frameworks.

You weren’t born believing in the many worlds interpretation (or in modal realism) and if you are a normal human being you most likely regarded it as quite outlandish at some point. Then some line of evidence or reasoning caused you to shift your opinion (e.g. because it seemed simpler, or overall a better explanation for physical evidence). If it shifted one way, then considering other evidence could shift it back again.

If we think of knowledge as a layered pie, with lower layers corresponding to knowledge which is more “fundamental”, then somewhere near the bottom we have paradigms of reasoning such as Occam’s razor / Solomonoff induction and UDT. Below them lie “human reasoning axioms” which are something we cannot formalize due to our limited introspection ability. In fact the paradigms of reasoning are our current best efforts at formalizing this intuition. However, when we build an AI we need to use something formal, we cannot just transfer our reasoning axioms to it (at least I don’t know how to do it; meseems every way to do it would be “ingenuine” since it would be based on a formalism). So, for the AI, UDT (or whatever formalism we use) is the lowest layer. Maybe it’s a philosophical limitation of any AGI, but I doubt it can be overcome and I doubt it’s a good reason not to build an (F)AI.

• It is not the case if the money can be utilized in a manner with long term impact.

OK, I was using \$ here as a proxy for utils, but technically you’re right: the bet should be expressed in utils (as for the general definition of a chance that I gave in my comment). Or if you don’t know how to bet in utils, use another proxy which is a consumptive good and can’t be invested (e.g. chocolate bars or vouchers for a cinema trip this week). A final loophole is the time discounting: the real versions of you mostly live earlier than the sim versions of you, so perhaps a chocolate bar for the real “you” is worth many chocolate bars for sim “you”s? However we covered that earlier in the thread as well: my understanding is that your effective discount rate is not high enough to outweigh the huge numbers of sims.

An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?

Well this is your utility function, so you tell me! Imagine a hacker is able to get into the simulations and replace pleasant experiences by horrible torture. Does your utility function care twice as much if he hacks both simulations versus hacking just one of them? (My guess is that it does). And this style of reasoning may cover limit cases like a simulation running on a wafer which is then cut in two (think about whether the sims are independently hackable, and how much you care.)

• An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one?

Well this is your utility function, so you tell me! Imagine a hacker is able to get into the simulations and replace pleasant experiences by horrible torture. Does your utility function care twice as much if he hacks both simulations versus hacking just one of them? (My guess is that it does).

It wouldn’t be exactly twice but you’re more or less right. However, it has no direct relation to probability. To see this, imagine you’re a paperclip maximizer. In this case you don’t care about torture or anything of the sort: you only care about paperclips. So your utility function specifies a way of counting paperclips but no way of counting copies of you.

From another angle, imagine your two simulations are offered a bet. How should they count themselves? Obviously it depends on the rules of the bet: whether the payoff is handed out once or twice. Therefore, the counting is ambiguous.

What you’re trying to do is write the utility function as a convex linear combination of utility functions associated with different copies of you. Once you accomplish that, the coefficients of the combination can be interpreted as probabilities. However, there is no such canonical decomposition.

• As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative “drift” along the chain, so that the ends are very different from each other (and not counterparts).

UDT doesn’t seem to work this way. In UDT, “you” are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extent with different physical entities in different worlds. This leads to the question of whether some algorithms are more “conscious” than others. I don’t think UDT currently has an answer for this, but neither do other frameworks.

I think it works quite well with “you” as a concrete entity. Simply use the notion that “your” decisions are linked to those of your counterparts (and indeed, to other agents), such that if you decide in a certain way in given circumstances, your counterparts will decide that way as well. The linkage will be very tight for neighbours in the chain, but diminishing gradually with distance, and such that the ends of the chain are not linked at all. This—I think—addresses the problem of trying to identify what algorithm you are implementing, or partitioning possible people into those who are running “the same” algorithm.

• Actually I was speaking of a different problem, namely the philosophical problem of which abstract algorithms should be regarded as conscious (assuming the concept makes sense at all).

The identification of oneself’s algorithm is an introspective operation whose definition is not obvious for humans. For AIs the situation is clearer if we assume the AI has access to its own source code.

• Oh I see, that makes sense.

• I agree with this counterargument, but this thread being what it is, in which direction should I vote sub-comments?

• Irrationality Game: One can reliably and predictably make \$1M/year, and it’s not that difficult. (Confidence: 75%)

• What do you mean by “one”? Literally anyone at all? Anyone at least as smart as the average LWer? Something else?

• Let’s say someone with an engineering bachelor’s degree.

• Care to describe how?

• Irrationality game:

There are other ‘technological civilizations’ (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%

There are other ‘technological civilizations’ in our own galaxy: 75% with most of the probability mass in regimes where there are somewhere between dozens and thousands.

Conditional on these existing: Despite some being very old, they are limited by the hostile nature of the universe and the realities of practical manipulation of matter and energy to never controlling much matter outside the surfaces of life-bearing worlds, either never leaving their solar systems of origin with anything self-replicating, or their replicators on average producing less than one seed to continue. 95%

Humanity has already received and recorded a radio signal from another thing-analogous-to-a-technological-civilization. This was either unnoticed or not unequivocally recognized as such due to some combination of very short duration, being a one-off event that was never repeated, being modulated in a way that the receiver was not looking for, or being indistinguishable from terrestrial radio noise. 20%.

Conditional on the above, the “Wow!” signal was such a signal. 20%.

• One statement per comment please.

• Probably overdid it with that one. Will split things up more in subsequent comments.

• There are other ‘technological civilizations’ (in the sense of intelligent living things that have learned to manipulate matter in a complicated way) in the observable universe: 99%

What do you mean by “complicated”? Were humans a technological civilization in 1900? In 1700? In 10,000 BC? In 50,000 BC?

• Even given other technological civilisations existing, putting “matter and energy manipulation tops out a little above our current cutting edge” at 5% is way off.

• Way off in which direction?

• There’s a lot you can do on the surface of a clement planet, and a lot you can do in a solar system without replicators that eat everything. Also depends on what you mean by ‘above’.

• Irrationality game:

Nice idea. This way I can safely test whether the baseline of my opinions on LW topics is as contrarian as I think.

My proposition:

On The Simulation Argument I go for “(1) the human species is very likely to go extinct before reaching a “posthuman” stage” (80%)

Correspondingly on The Great Filter I go for failure to reach “9. Colonization explosion” (80%).

This is not because I think that humanity is going to self-annihilate soon (though this is a possibility).

• What is the extinction scenario you have in mind?

Hopefully no extinction during the next many thousands of years. Which extinction scenario applies afterwards is difficult to predict.

As I briefly argued in my baseline post, I think that a posthuman state is unlikely due to thermodynamics/complexity constraints.

• Irrationality game: people are happier when living in traditional social structures, and value being part of their traditions[1]. The public existence of “weird” relationships (homosexuality, polyamory, BDSM, …) is actively harmful to most people; the open practice of them is a net negative for world utility. Morally good actions include condemnation and censorship of such things.

[1] Or rather what they believe are their traditions; these beliefs may not be particularly well-correlated with reality.

• Irrationality Game: Currently, understanding history or politics is a better avenue than studying AI or decision theory for dealing with existential risk. This is not because of the risk of total nuclear annihilation, but because of the possibility of political changes that result in setbacks to or an accelerated use and understanding of AI. 70%

• I’m 99% confident that dust specks in 3^^^3 eyes result in less lost utility than 50 years of torturing one person.

• Utility seems underspecified here.

• Y’all know this already, but just a reminder: preferences ain’t beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word “should” are almost always imprecise: avoid them.

• I agree with you (maybe not 99% certainty though), and I’m surprised more people do not. That is, assuming the original stipulation of the dust specks causing only a “mild inconvenience” for everyone, and not some sort of bell curve of inconvenience with a mean of “mild”. People around here seem to grok the idea of the hedonic treadmill, so why don’t they apply that idea to this situation? Assuming all of those 3^^^3 people truly have only a “mild inconvenience”, I would argue that from the subjective point of view of each individual, the utility of their day as a whole has not been diminished at all.

Actually, the more I think about it, the idea itself is poorly formed. It depends a lot on what sort of inconvenience the dust causes. If it causes 0.00000001% of the people to decide to shoot up a store or something, then I guess the one person being tortured would be better. But if the dust does not cause any sort of cascading effect, if it’s truly isolated to the lost utility of the dust itself, then I’d say the dust is better.

• Elsewhere I argued that the pain from the dust specks doesn’t add up (and is therefore not really comparable to one single person’s torture) unless the victims are forming a hive mind. What the thought experiment is actually comparing is one instance of horrible pain versus many, many individual and not groupable instances of minor discomfort.

• 12 Mar 2014 16:44 UTC
2 points

.

• If this train of thought continues along its natural course, you have to wonder why you are being “shown” the experience you have this moment as you read this, rather than some other more interesting or influential moment. Also, it is not clear that you would want to use this kind of anthropic reasoning to determine a policy; people that are not conscious but think they are would incorrectly draw the same conclusions and thus muck up the social commons with their undue senses of specialness.

ETA: There are a few other counterarguments similar to those in the previous paragraph. This has perturbed me for many years now, because the line of reasoning in the parent comment really does seem like the most intuitive approach to subjective anthropics. I’d be very satisfied to find a solution, but it seems equally likely that there’s just something pretty wrong with our intuitions about (relative) existence, which has implications for which kinds of decision theories we should be willing to put our weight on.

ETA2: And the UDT pragmatist in me wonders whether it even means anything for a hypothesis to be true, if it rationally shouldn’t affect your decisions. If anything I would lean toward decision theoretic epiphenomenalism implying falsehood.

• The part of this I disagree with most is putting the probability as low as 10%. I upvoted, since that seemed like just putting a number to the word “significant”, and the other claims seem about right.

• Why are we reviving this at all?

• Just as a curiosity, this was the most downvoted comment in the original thread:

For a large majority of people who read this, learning a lot about how to interact with other human beings genuinely and in a way that inspires comfort and pleasure on both sides is of higher utility than learning a lot about either AI or IA. ~90%

(-44 points)

• This is a case where the system of hiding comments voted below −3 is a bad thing. In this thread, downvotes indicate that a belief people may have thought was rare is actually pretty common on LW, which is something I’m interested in seeing.

• You could do this with polls instead of karma. The advantage of karma is that it provides an incentive for people to play to win. The disadvantage is hiding comments.

• “provides an incentive for people to play to win”

You mean an incentive to hold irrational beliefs? Is that something we want to incentivize?

• No, not to hold irrational beliefs, but to admit to holding irrational beliefs.

• I agree. I want to comment on some of the downvoted posts, but I don’t want to pay the karma.

• Great idea. I’ll put a note in the post so that if anyone ever resurrects this in the future they’ll do it that way.

• Should we downvote posts with many propositions if we agree with a majority? One? All? There are already two split clusters for me.

• Hmm. I’d recommend if the split has one that’s much stronger go with that vote, otherwise leave it at zero and explain in a comment.

• Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it’s all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Unfortunately, since the first irrationality game, the hiding code was changed so that this is no longer possible.

• Am I allowed to post about whether a counterfactual world would be “better” in some sense, if I specify something like “If Y had happened instead of X, the number of excess deaths from then till now would be lower /​ economic growth would have been better” ? I don’t know whether that falls under preferences disguised as beliefs.

• Perhaps you can try to turn it into a more generalized form?

• How do you mean?

• Irrationality game:

Most progress in medicine in the next 50 years won’t be due to advances in molecular biology and the production of drugs designed to target specific biochemical pathways, but to other paradigms.

Probability: 75%

• Which other paradigms do you predict will become more relevant?

• I think that you can learn a lot via empirical measurement without needing to understand the underlying biochemistry. Apart from direct measurement, it’s also about developing better metrics for things like lung function. That’s partly why I invested significant effort into Quantified Self community building.

Exploring the phenomenological aspect of illnesses provides an area with a lot of untapped potential for knowledge gathering.

I think there are large returns to be found in studying human movement in detail. As it stands, a method like Feldenkrais has some studies to support it but no strong scientifically investigated base. It should be possible to use cameras and computers to get accurate models of human movement and investigate an approach like Feldenkrais in a deeper scientific way.

Relaxation is generally poorly understood. I think most patients could benefit from spending hours in floating tanks after a major operation, yet few hospitals have floating tanks or maximize relaxation in other ways. In general, hospital beds and hospital food don’t seem to be optimized for health outcomes.

Having a well-developed theory that can predict placebo effects accurately would be good. On the one hand it will make it easier to gather knowledge; on the other hand it will help doctors frame their interactions in a way that helps patients.

Having good empathy training for doctors has potential to improve healthcare.

Psychological interventions like hypnosis.

Biofeedback.

That’s just a list of possibilities I can think of. There are probably things that are unknown unknowns for me.

• Those approaches risk turning therapy into stabs in the dark by neglecting the details of what is actually going on inside the black box.

• Those approaches risk turning therapy into stabs in the dark by neglecting the details of what is actually going on inside the black box.

Most people who pretend that they know what goes on inside the black box are wrong anyway.

Drug companies would like to predict which compounds work based on an understanding of biochemistry, but they still have to run expensive trials in which over 90% of the compounds fail. Furthermore, it becomes exponentially more expensive to discover additional drugs that way.

That said, the ideal of blinding that currently haunts medical research is exactly about the virtue of stabbing in the dark: if you could see what you were doing, you wouldn’t be objective anymore.

• Irrationality game: The straightforward view of the nature of the universe is fundamentally flawed. 90%

By “fundamentally flawed”, I mean things like:

• I am currently dreaming.

• The singularity has already happened, and this world maximizes my CEV.

• I am a Boltzmann brain.

• I am in a simulation.

• Or some similar thing is true, but I haven’t thought of it.

• 16 Mar 2014 3:24 UTC
0 points

Irrationality game: The Great Stagnation is actually occurring, and it is mostly due to fossil fuel depletion rather than (say) leftist politics or dysgenics. (60%)

• Irrationality game: most opposition to wireheading comes from seeing it as weird and/​or counterintuitive in the same way that most non-LWers see cryonics/​immortalism as weird. Claiming to have multiple terminal values is an attempt to justify this aversion. 75%

• The Hellenistic astronomers (300BC-0) were generally heliocentric. 90%

• 16 Mar 2014 3:10 UTC
−2 points

Irrationality Game: We need a way to give feedback on irrationality game entries that the troll toll won’t mess with. (98%)

[pollid:643]

• Irrationality Game:

Everyone alive in developed nations today will die a fairly standard biological death by age:

150: 75%

250: 95%

(This latter figure accounts for the possibility that stories of the odd Chinese monk living to age 200+ after eating only wild herbs from age 10 onward are actually true and not exaggerations, or of someone sticking religiously to unreasonably-effective calorie restriction regimes combined with some interesting metabolic rejiggering in the coming decade or two.)

The majority (90+%) of people born in developed nations today will die a fairly standard biological death by age:

120: 85%

150: 99%

• I think the probability of a nuclear war or bio-engineered plague is higher than 5%.

• I think there’s a good chance that a nuclear war would kill less than 90% of the population, though.

• (But now that I think about it, even the ones it doesn’t kill straight away will be much less likely to live to 120 than they otherwise would.)

• Irrationality Game:

Politics (in particular, large governments such as those of the US, China, and Russia) is a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference, rather than existential risk or some other problem.

• Irrationality game: Humanity’s concept of morality (fairness, justice, etc) is just a collection of adaptations or adaptive behaviours that have grown out of game theory; specifically, out of trying to get to symmetrical cooperation in the iterated Prisoner’s Dilemma. 85% confident.
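The “symmetrical cooperation in the iterated Prisoner’s Dilemma” framing leans on a standard textbook result: conditional reciprocity (e.g. tit-for-tat) sustains mutual cooperation where unconditional strategies do not. A minimal sketch, with textbook payoff values and strategies chosen for illustration (nothing here comes from the comment itself):

```python
# Standard IPD payoffs: (my move, their move) -> my payoff.
# 'C' = cooperate, 'D' = defect; T=5 > R=3 > P=1 > S=0.
PAYOFFS = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoff for each player over an iterated game."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two reciprocators lock in symmetric cooperation (300 each over 100
# rounds); against an unconditional defector, tit-for-tat loses only
# the first round before matching defection.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(tit_for_tat, always_defect))  # (99, 104)
```

This is the sense in which "fairness" intuitions can be read as enforcing symmetric cooperation: punishing defection while rewarding cooperation is what makes cooperation stable in the repeated game.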

• Unsure what you mean by the ‘just’. Should it be more, and what is different about how we value morality based on its origin?

• There’s no other source of morality and there’s no other criterion to evaluate a behaviour’s moral worth by. (Theorised sources such as “God” or “innate human goodness” or “empathy” are incorrect; criteria like “the golden rule” or “the Kantian imperative” or “utility maximisation” are only correct to the extent that they mirror the game theory evaluation.)

Of course we claim to have other sources and we act according to those sources; the claim is that those moral-according-to-X behaviours are immoral.

what is different about how we value morality based on its origin?

Evolution, either genetic or cultural, doesn’t have infinite search capacity. We can evaluate which of our adaptations actually are promoting or enforcing symmetric cooperation in the IPD, and which are still climbing that hill, or are harmless extraneous adaptations generated by the search but not yet optimised away by selection pressures.

• Evolution, either genetic or cultural, doesn’t have infinite search capacity. We can evaluate which of our adaptations actually are promoting or enforcing symmetric cooperation in the IPD, and which are still climbing that hill, or are harmless extraneous adaptations generated by the search but not yet optimised away by selection pressures.

But we are our adaptations. Are you claiming morality should be defined by evolutionary fitness? (So we should tile the universe by our DNA?) How is that better than other external sources of morality? We already have a morality, it doesn’t matter (for the purpose of being moral) where it came from, be it God or evolution.

Also, saying the morality comes from solving PD doesn’t help, since PD already assumes the agents have utility functions. Game theory is only directly relevant to rationality, not morality. If you and I are playing a non-zero sum game then we better cooperate for our own good. But the fact that my utility function already includes your well-being is completely independent.

I agree that evolutionary thinking can be helpful to figure out what our morality is (since moral intuition is low bandwidth and noisy), but I’m against imaginary extrapolations of evolution.

• criteria like “the golden rule” or “the Kantian imperative” or “utility maximisation” are only correct to the extent that they mirror the game theory evaluation.

What makes the game theory evaluation correct?

• By “concept of morality”, do you mean moral intuitions or the output of ethical theories?

• Sorry, I was trying to get at ‘moral intuitions’ by saying fairness, justice, etc. In this view, ethical theories are basically attempts to fit a line to the collection of moral intuitions—to try and come up with a parsimonious theory that would have produced these behaviours—and then the outputs are right or interesting only as far as they approximate game-theoretic-good actions or maxims.

• Humanity’s concept of morality (fairness, justice, etc) is just a collection of adaptations or adaptive behaviours that have grown out of game theory; specifically, out of trying to get to symmetrical cooperation in the iterated Prisoner’s Dilemma.

What do you mean “just”?

• Irrationality Game:

Non-healthcare related personal quality of life advances since around 1955-1965 (improvements in food, clothing, entertainment, transportation, communication, etc.) do not increase personal happiness due to the hedonic treadmill, and most of the advances have come about as a means to capture consumers’ median increase in real wealth due to a more efficient economy caused by technological advances. Furthermore, if a consumer hypothetically had access to the 1960 basket of consumer goods produced using 2014 technology, they could live at or above the current middle-class level of happiness by spending about $15,000 annually for an average household of three. ~85%.

Please note I am referring only to personal consumer goods and NOT business or technological improvements such as manufacturing techniques, computing power, automation, etc.

• Can you elaborate on what this means? Do you mean that if we had more consumer goods of the year 1960, we would be as happy as the current middle class?

• If we could choose to buy a 1960-era consumer good (telephone, radio, house, car) that was manufactured using modern manufacturing techniques and modern technology, many of these goods would be significantly cheaper to produce than anything available on the current market, and buying these goods instead of modern goods would result in zero net loss of happiness for the consumer.

A radio, for instance, would look and act exactly like a 1960-era radio, but it could use digital technology, integrated circuits, etc. to make it work. The functionality and appearance of the goods are what remains the same as 1960.

• It’s plausible that a 1960s telephone could replace my parents’ landline phone. But if it replaced my smartphone, I think my happiness would decrease.

I have no opinion about the other things.

• Irrationality Game

No one who is presently cryopreserved, no matter when, how, or by whom, will ever live again. 90%.

• 13 Mar 2014 8:02 UTC
−7 points

Irrationality game:

Most animals we farm experience reality (including pain) in a human-enough-way that humanity’s CEV will be horrified we allowed it to go on as long as we did. 80%

The same applies to most wildlife (most animals’ lives have negative utility). 50%

(Note: I waffled on whether or not this was ruled out on the “no preferences disguised as beliefs” rule, but settled on “experience reality” as an empirical-enough question to be, eventually, objectively decided)

• Horrified we allowed wildlife to go on? What alternative do you propose?

• Heh. Given that the parent post said “most animals’ lives have negative utility” the alternative is obvious:

KILL THEM ALL!!!

:-D

• I think the OP meant “horrified we allowed [farming] to go on.”

• I upvoted, mostly because of how low the estimate of the second claim was. I’m a bit more confident than that. The other factor was the “human-enough-way” phrasing.

It was a little difficult to choose how to vote because you put two fairly distinct claims in one post.

• My understanding is that if your probability estimate is higher than the one given, you’re supposed to downvote.

• That is not my understanding.

Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.

• I am 90% confident that when you say “exist” in a non-mathematical context, you don’t understand what you mean. This includes solipsist.

• Upvoted. I believe that the universe is ultimately a complicated piece of mathematics, so when I say “exist” in a non-mathematical context, I mean the same thing as when I say it in a mathematical context.

• I don’t think “exist” means a single thing even in mathematics. For example, in second order logic I’d consider quantifying over elements in the domain of discourse to be different than quantifying over relations.

• I would still consider this to be a single thing, the same way that “P and Q” is still a statement.

Phrasing this a different way: when I say “exist” I mean “exist in the sense of quantifying over either relations or elements” (definition subject to revision as I learn more non-first-order logic).

• Agreed! Sorry to vote you down!

• Irrationality Game: Letting young people “wing-it” in the dating market results in much worse outcomes relative to more structured approaches. 70%

I just updated slightly in the downwards direction on this actually due to new research. But I still believe it.

• What are a few more structured approaches that could substantially improve matters? Some improvements can definitely be made, but I disagree that outcomes are much worse. Two studies suggest marriage markets are about 20% off the optimal match (Suen and Li (1999), “A direct test of the efficient marriage market hypothesis”, based on Hong Kong data, and Cao et al (2010), “Optimizing the marriage market”, based on Swiss data). While 20% is not trivial, it’s not a major failure.

If there are major improvements to be had, I expect it to come through individual attitudes and expectations, not overall structure. Does advice like “don’t become fixated on one person while you’re still young” count as more structure?

• Those are interesting papers. Thanks for the pointers. By structure yeah, I mean pretty much anything. Basically we need a secular replacement for church that provides kids with access to a variety of trusted adults so they have lots of advice to draw from.

Edit: I am confused. “We reallocate approximately 68% of individuals (7 out of 10) to a new couple that we posit has a higher likelihood of survival.”

• I’m surprised that you expected LW to disagree with you on this, it strikes me as the exact kind of weird-but-obvious belief that Lesswrong types love.

• If you think you have an idea for a structured approach that works better than the status quo for everyone involved, try testing your hypothesis and making it in to the next blockbuster dating website/​service.

• Irrationality game: Meticulously optimizing every minor feature of your life is not worth the added stress and worry.

• That seems to be phrased to be as easy to agree with as possible—almost anyone into life-optimization recognises that there’s a limit to how much is worth the effort, and will read your sentence as referring to the amount of life-optimization that they see as too much, and will agree.

Assuming you actually want to play to win in this game, you should identify a specific example or degree of life-optimization that you think isn’t worth it, that you expect most of LW to incorrectly think is actually worth it.

• A friend of mine used to work for a guy who insisted on enforcing an exact maximum number of mouse clicks to reply to an email. The guy had to send autoreplies to emails all day, and he was required to apply his boss’s exactly-these-clicks-here-and-here-and-not-one-more method.

That’s the level of optimization that I find counterproductive.

• Irrationality game:

Occultism /​ ritual magic /​ shamanism contains gems of real insight into the (material and non-supernatural) workings of the human nervous system and mind which have received less attention from mainstream psychology. These can be put to good use when not tied to counterproductive ideologies or assumptions of physical reality of mental experiences (though the latter is far less important). 75% EDIT: the 75% would be higher but for my uncertainty as to the degree of day to day usefulness for most of the population.

• I know you don’t intend “One time someone somewhere thought something right for the wrong reason. About psychology!”, but that’s sufficient for what you described.

Could you be a bit more precise about the gems-to-ideologies ratio? Is it one gem every couple of ideologies, a few gems per ideology, …? Are the same gems repeated a lot, or do you get new ones in each ideology?

• A number of gems are repeated over and over again. These include:

*The power of the mind over biology, AKA the placebo effect. This is what happens when what started out 500 megayears ago as a glorified thermostat gains sentience. When the same organ system that does the thinking also controls body temperature and blood pressure and even the base level of inflammation throughout the body (some really cool research I saw here at the university in the last year or two), what goes on in the mind has a profound direct effect on health and well-being. I can almost make myself pass out from low blood pressure at a thought (having teased apart my old psychosomatic reaction to flu shots), and when weaponized the placebo effect can kill. It can also be weaponized for good, and many of these traditions essentially do so to some degree for their practitioners.

*The power of symbols to change the world-as-it-is-experienced-or-valued without changing the physical world, or to produce anchor-points for behavioral change, or to redirect basal urges into arbitrary directions.

*The ability of a human to contain multiple agents with their own agendas, or to instantiate a ‘local copy’ of an abstract agent represented in a culture.

In my cursory poking around, there seem to be a lot of variations on these themes, perhaps along with a whole bunch of practical ‘this works for this purpose’ things. As someone rather concrete-minded and very busy I’m not exactly the best person to get in for first hand experience unfortunately.

Also, to clarify, I didn’t mean to say that all of these occult/​magical traditions were tied to counterproductive ideologies. LaVeyan satanism or Christian science seem pretty counterproductive due to poisonous philosophy and rejection of useful practices respectively, while modern druidry for example seems nearly completely benign (and incidentally even admits that one of its gods was made up by a university class project, and doesn’t care because he still works and shows up when invoked in ritual). I also don’t think that the visualization of, or acting in response to physically unreal things is necessarily counterproductive when the intended effect is upon human behavior or thought or nervous-system-state rather than the external physical world.

• Thanks for the examples.