Must everyone begin by not trying to be rational? I probably did too then, but I don’t remember it. Trying to be correct by making your thought processes accurate seems like a pretty obvious thing to do (I assume that’s what’s meant by rational). I’ve rarely been so shocked as when I realized (at about 12, I think) that it’s normal and not embarrassing in society to have opinions for ‘arbitrary’ reasons. I’m still kind of puzzled about what else you would think you were doing, even if you are delusional about your success. What did you folks transition here from?
KatjaGrace
“Philosophy triumphs easily over past and future evils; but present evils triumph over it.”
-- François de La Rochefoucauld
Observing that we nobly analyse distant things, and in the present do whatever the hell we want.
If they are false they are small violations of truth and thus inconsequential.
Making accurate significant claims in comments on obscure blogs isn’t often consequential.
It could be of course that you are limited by other features of your psychology, and fail to notice because noticing such things doesn’t lead to useful or sexy behavior. Such discriminating failure to notice things can’t be mere random stupidity.
This post is an argument against voting on your updated probability when there is a selection effect such as this. It applies to any evidence (marbles, existence etc), but only in a specific situation, so has little to do with SIA, which is about whether you update on your own existence to begin with in any situation. Do you have arguments against that?
Are you saying this problem arises in all situations where multiple beings in multiple hypotheses make the same observations? That would suggest we can’t update on evidence most of the time. I think I must be misunderstanding you. Subjectively indistinguishable beings arise in virtually all probabilistic reasoning. If there were only one hypothesis with one creature like you, then all would be certain.
The only interesting problem in anthropics I know of is whether to update on your own existence or not. I haven’t heard a good argument for not (though I still have a few promising papers to read), so I am very interested if you have one. Will ‘exploring their relationships’ include this?
Don’t worry, I meant to imply no such thing! Observing others only :)
If you are in a universe, SIA tells you it is most likely the most populated one.
SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (otherwise you can compare those like yours in some aspects, then update on the other info you have, which would get you to the same place).
SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.
You could have been one of those who didn’t learn the rules, you just wouldn’t have found out about it. Why doesn’t the fact that this didn’t happen tell you anything?
We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (e.g. ones that go on forever), you are almost certainly in one of those.
Intuitive SIA = consider yourself a random sample out of all possible people
SSA = consider yourself a random sample from people in each given universe separately
e.g. if there are ten people and half might be you in one universe, and one person who might be you in another:
SIA: a greater proportion of those who might be you are in the first.
SSA: a greater proportion of the people in the second might be you.
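The contrast can be sketched with toy numbers, assuming (hypothetically) one universe with ten people, five of whom might be you, and another with a single person who might be you:

```python
from fractions import Fraction

# Hypothetical toy numbers: universe 1 has 10 people, 5 of whom
# might be you; universe 2 has 1 person, who might be you.
people = {"universe 1": 10, "universe 2": 1}
might_be_you = {"universe 1": 5, "universe 2": 1}

# SIA: treat yourself as a random sample from all possible people,
# so weight each universe by how many of its people might be you.
sia_share_1 = Fraction(might_be_you["universe 1"], sum(might_be_you.values()))

# SSA: within each universe separately, ask what fraction of its
# people might be you.
ssa_frac = {u: Fraction(might_be_you[u], people[u]) for u in people}

print(sia_share_1)  # 5/6 of the "might be you" people are in universe 1
print(ssa_frac)     # 1/2 within universe 1, but 1/1 within universe 2
```

So SIA favours the first universe (most of the candidate "you"s live there), while SSA favours the second (there, everyone is a candidate).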
What settlers have really seen also depends on whether your prior is correct in our world of real islands (implicitly 1⁄4 to each possibility). But you can easily model what people would see in a world with that distribution of islands:
e.g. Let an easy-to-get-to island have 100 visitors and a hard-to-get-to one 10 visitors. Let visitors to disaster-prone islands die before the next people arrive. Let there be ten of each type of island.
Total observers:
Island type 1 (easy, safe): 1 arrives to an uninhabited island, 99 to an inhabited one
Island type 2 (hard, safe): 1 arrives to an uninhabited island, 9 to an inhabited one
Island type 3 (easy, disaster-prone): 100 arrive to an uninhabited island
Island type 4 (hard, disaster-prone): 10 arrive to an uninhabited island
Of the 1120 observers arriving at uninhabited islands (ten of each of the above island types), 1000 will be on type 3 islands, consistent with SIA. In this case the alternative anthropic principle preferred by others, the self-sampling assumption (SSA), can come to the same conclusion because the possible islands you are considering are actual. If real islanders have seen something other than this, it is because there were different frequencies of islands.
Determining whether there is a filter is a separate issue to updating on the size of our ‘reference class’ in given scenarios. All that is needed for my argument is that there is apparently a filter at the moment.
You are correct that civilizations who know they are in the future or the past aren’t added to our reference class for SIA purposes, but it looks to me like this makes no difference to the shift if the proportions of people in late filter and early filter worlds are the same across time, which I am assuming in a simple model, though you could of course complicate that.
“Indeed, SIA, once we update on the present, cannot tell us anything about the future.”
For my argument it only need tell us about the present and the past. They can inform us on the future in the usual way (if we can work out where the filter has been in the past, chances are it hasn’t just moved, which has implications for our future).
Given any particular world size, SIA means the filter is more likely to be late. Larger worlds with early filters can of course be made just as likely as smaller worlds with late filters, so if you double the size of the early filter worlds you look at, SIA makes no difference. If you were to include the one planet early filter world and the four planet late filter world in your original set, the usual shift toward late filter worlds would occur.
This doesn’t seem a trick specific to SIA: you can do the same thing to avoid updating on many things. e.g. consider the following non-anthropic example:
There are two urns. A has odd- and even-numbered balls in it, and B has just odd-numbered balls. An urn will be chosen and some unknown number of balls pulled from it. You will be told afterwards whether number 17 comes up or not.
Number 17 did come up. Does this increase your posterior on the urn being B? Intuitively yes: around twice as many odd balls would have been drawn from the odd-ball urn as from the mixed one, giving twice as many opportunities for 17 to come up. But now consider these options:
X) Two balls drawn from the mixed urn
Y) Four balls drawn from the mixed urn
A) One ball drawn from the odd-ball urn
B) Two balls drawn from the odd-ball urn
With the same prior as in your example, you get the same results.
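A quick sketch of why the update cancels, reading options X and Y as draws from the mixed urn A, and assuming (hypothetically) that the mixed urn holds balls 1 to 20 while the odd urn holds the ten odd balls, with a flat 1/4 prior on each option:

```python
from fractions import Fraction

# Hypothetical urn sizes: mixed urn has balls 1..20 (10 odd, 10 even),
# odd urn has the 10 odd balls. Drawing k balls without replacement
# gives ball 17 a k/20 chance from the mixed urn, k/10 from the odd urn.
prior = Fraction(1, 4)
likelihood_17 = {
    "X": Fraction(2, 20),  # two balls from the mixed urn
    "Y": Fraction(4, 20),  # four balls from the mixed urn
    "A": Fraction(1, 10),  # one ball from the odd urn
    "B": Fraction(2, 10),  # two balls from the odd urn
}

# Bayes: posterior proportional to prior * likelihood, then normalise.
posterior = {h: prior * p for h, p in likelihood_17.items()}
norm = sum(posterior.values())
posterior = {h: p / norm for h, p in posterior.items()}

# Doubling the draws in the mixed-urn options exactly offsets the
# evidence: the posterior on the odd urn equals the 1/2 prior.
print(posterior["A"] + posterior["B"])  # 1/2
```

Choosing option Y to involve twice as many draws as X is what cancels the update; with X and Y drawing the same number of balls as A and B, seeing 17 would favour the odd urn as usual.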
Conclusion: I don’t think any of this makes much difference to the original argument.
fixed I think
“Should you be seeking to gain in that particular moment?...Hell no! Right? Because you don’t want to take a risk of falling or getting into a spot where the tiger can jump up and get you or anything like that. Your brain wants you to sit tight, stay put, shut up, don’t rock the boat… until the crisis is over. It wants you to sit tight. That’s the “pain brain”.”
Yet procrastination mostly consists of finding more fun things to do.
http://itunes.apple.com/us/app/track-share-daily-life-tracker/id326385763?mt=8 tracks anything you want on an iPhone
To say that you will achieve anything worth achieving is to arrogantly imply that you believe yourself better than those around you who accept their exempting limitations. Thus it is necessary to say that you are ‘trying’, as a clear message that you understand that you won’t actually succeed. The danger is of then forgetting not to merely ‘try’.