I am by no means an expert in this. My theory is that effective writing in general was a way to signal one's intelligence in most medieval societies, especially if one could read and write in a form of ancient text. In Western Europe this was achieved by directly using an old language: Latin. Proficiency in a different language is by itself enough to be an indicator of intelligence. The Chinese, however, have to an extent been using the same language (or at least the same writing) for their entire history. For example, a typical grade 8 Chinese language textbook includes many old passages, some of which were written 18 centuries ago. Being able to write plainly in an everyday language is not hard, so Chinese scholars had a greater urge to display their status by using poetic and archaic expressions, very often at the expense of clarity.
Ahh, the famous Lun Yu. It is full of expressions whose direct translation gives you a headache. To me the most famous example is “民可使由之不可使知之”. Due to the lack of punctuation it can be translated in two different ways:
1: 民可使由之,不可使知之:common people shall be commanded, (but) not enlightened.
2: 民可,使由之。不可,使知之。:(if) common people are well educated, let them act on their own. If not, enlighten them. Drastically different political ideals here.
Essentially what Gunnar_Zarncke said.
Assuming the objective is to maximize my money, there is no good strategy. You can make the decision as you described, but how do you justify it as the correct decision? I either get the money or not, as I am either L or not. But there is no explanation as to why. The fractional numbers never apply to just me.
The value calculated is meaningful if applied to all copies. The decimal numbers are the relative fractions. It is correct to say if every copy makes decisions this way then they will have more money combined. But there is no first-person in this. Why would this decision also be the best for me specifically? There is no reason. Unless we make an additional assumption such as “I am a random sample from these copies.”
It seems earlier posts and your post have defined anthropic shadow differently in subtle but important ways. The earlier posts by Christopher and Jessica argued AS is invalid: that there should be updates given I survived. Your post argued AS is valid: that there are games where no new information gained while playing can change your strategy (no useful updates). The former is focusing on updates, the latter is focusing on strategy. These two positions are not mutually exclusive.
Personally, the concept of “useful update” seems situational. For example, say someone has a prior that leads him to conclude the optimal strategy is not to play the Chinese Roulette. However, he was forced to play several rounds regardless of what he thought. After surviving those rounds (say EEEEE), it might very well be that he updates his probability enough to change his strategy from no-play to play. That would be a useful update. And this “forced-to-play” kind of situation is quite relevant to existential risks, which anthropic discussions tend to focus on.
Based on the reply I am not very certain of your exact position. I suspect it is implying the multiverse response to fine-tuning. It suggests the reason for observed fine-tuning is that there are many universes in total, and only the ones compatible with life can give rise to observers pondering the parameters. Therefore finding ourselves in a universe compatible with life is not a surprise, i.e. it is not statistically incredible because of the huge number of universes out there.
I have to say that answer is very problematic. It treats “I” or “us” as the outcome of a sampling process subject to survivorship bias, which interprets the WAP as an Observer Selection Effect (OSE). This conceptual selection has to be done from a god’s eye perspective. It makes the same mistake as the fine-tuning argument by mixing first-person reasoning with objective reasoning.
In my opinion, this actually justifies the fine-tuning argument. Furthermore, it hijacks the anthropic rebuttal (which should be a simple tautology based on consistent perspective thinking). It also leaves the door open for rebuttals such as Leslie’s firing squad and the fine-tuned multiverse.
In a sense, the fine-tuning argument is still an ongoing debate because currently, anthropic reasoning is inadequate. It is filled with paradoxes and controversies. All popular assumptions (SSA, SIA) treat indexicals as the outcome of some sampling process, implying the OSE. My Perspective-Based Argument (PBA) is an attempt to change that.
Interesting article. I dare not say I understand it fully. But in arguing that some categories are more or less wrong than others, is it fair to say you are arguing against the ugly duckling theorem?
Exactly this. The problem with the current anthropic schools of thought is using this view-from-nowhere while simultaneously using the concept of “self” as a meaningful way of specifying a particular observer. It effectively jumps back and forth between the god’s eye and first-person views with arbitrary assumptions to facilitate such transitions (e.g. treating the self as the random sample of a certain process carried out from the god’s eye view). Treating the self as a given starting point and then reasoning about the world would be the way to dispel anthropic controversies.
This is a hard concept to grasp. But if my understanding is correct, I think you have described a legitimate paradox, especially for physicalism. If everything is physical and nothing is beyond it, and physics can be explained by math (in terms of the values of fundamental constants and various laws), then how come only one particular set of values is physical (“real”) while others are not? There seems to be a missing deciding factor not explained by math or physics.
An obvious way out is of course to say “all mathematically possible universes ARE real; physics is only trying to determine which particular universe we live in.” Then the problem becomes how to define WE. Again, this appears to me to be an impossible task for physics. Imagine a complete physical description of dadadarren: it does not seem to cover the fact that he is me, or that I am experiencing the world from that physical system’s perspective.
FWIW, I will take a swing at this paradox.
We seem to know that there is a reality that exists. This is undeniable. But how do I know or believe there is a reality out there? Only from the interactions between me and the environment. Those interactions ultimately lead to various subjective experiences directly felt which form my belief in a “real world”.
(Conversely, if I question whether my experiences truly reflect what’s out there, then I question reality. Like brain in a vat or similar skeptical arguments)
It seems to be the case that this reality is perfectly mathematically describable. Also undeniable. All interactions from the environment seem to be predictable/explainable using math (subject to inherent indeterminacies and computing power): if I let go of a ball, it will drop; if I look at a window I can see what’s behind the glass, as it is transparent; if I measure the spin of an electron, there is a certain probability for each outcome, etc.
However, if physics is the mathematics that explains those interactions, then it cannot describe everything in the universe. Most importantly its scope does not include me. But because I believe in reality and that you are real, I can imagine thinking from your perspective too. And it would be the same. Physics can explain the environment’s action upon you but not you. However, now I am in the scope of physics from your perspective. And it doesn’t have to be applied from a human being’s viewpoint, any physical system’s perspective is just as valid.
In this sense, the objectivity of physics does not mean it describes the entire universe with a “view from nowhere”. But rather, those mathematical equations remain useful from a wide range of perspectives. Remember, whoever/whatever at the perspective’s center is not described by physics. (IMO that is the domain of subjective experience and consciousness)
It seems that whether a mathematical universe exists/is real cannot be a mathematical property of that mathematical universe. I agree with this too.
My solution for the “extra ingredient” that determines what is real versus what is merely mathematical is the first-person perspective. It is not something explainable by math or logic; it is based on subjective experience. There are other things in other mathematical universes that can “think” (using the term liberally). But only this body’s subjective experience and consciousness are felt, so who the first person is, is primitively clear. And by using expressions such as “the others” it is already clear which universe is “the real one”.
In my opinion, the indexical “I” has an intrinsically clear meaning: the first-person. There is no point in further deconstructing it. Each of us inherently knows which person “I” is because the subjective experience is due to it, the rest of the world is perceived by interaction with it. As a concept, it cannot be further reduced or explained by logic or reasoning. I’m ok with you treating it as a thought process, but it doesn’t have to be about anthropics. It is something far simpler and basic.
Anthropic problems lead to paradoxes because they are formulated using specific first-person perspectives. This allows the problem to be set up without using any particular physical person/time.
In the original sleeping beauty problem, there could be 2 awakenings. But the question doesn’t specify which physical one is being considered. By saying “as Beauty wakes up in the experiment”, the awakening is specified by Beauty’s first-person perspective (the one happening “now”). So the problem is really only understandable from this perspective. And to comprehend the question we have to imagine being Beauty in that scenario. Thus it could only be answered from this first-person perspective.
In your modified version, the two awakenings are distinguishable. Now we can understand it by imagining being Beauty in her situation. We can also comprehend which day is being questioned by taking an outsider view: “the day Beauty is able to think about anthropics”. So it can be solved either way, from Beauty’s first-person perspective OR from a god’s eye view. That is actually the case for most probability problems, which is why people want to solve anthropic problems from a god’s eye view too: it worked so well before.
For the original sleeping beauty problem, solving it from a god’s eye view is impossible because the question uses first-person concepts like “now” or “I”. So an additional assumption is needed to change them into something meaningful from the god’s eye view. That is where SSA or SIA kicks in: they change the first-person “I” or “now” into a random sample, to be incorporated into god’s-eye-view reasoning.
Because in your modified version the awakenings are distinguishable thus can be perfectly understood from an outsider’s view, SSA or SIA is not needed to answer it. “Beauty wakes up on the day she’s able to think about anthropics, aka Monday” is no new information and the probability stays put at 1⁄2. So unless we want to treat every problem as an anthropic problem, there is really no need to bring up SSA or SIA here.
Turning with the probability of 2⁄3 is not a self-locating probability; it is a valid decision. What is not valid is asking, at an intersection, “what is the probability that here is X?” That is a self-locating probability. It needs to employ the first-person perspective to make sense of “here”, while also needing a god’s eye view to treat the location as unknown, i.e. mixing two perspectives. We can’t assign a value to it and then make a decision based on that.
If we consider perspective as fundamental in reasoning then physics cannot be regarded as the description of an objective reality, rather it is the description of how the world interacts with a perspective center. So physics not describing the observer itself is to be expected. Yet free will (and subjective experience in general) are only relevant to the self. So physics cannot be against free will as it is not something within its domain of study.
That is all assuming perspective is the fundamental form of reasoning. If we consider objective reasoning as fundamental, then physics as the description of the objective reality is the foundation of any perspective experiences such as free will. And it would be right to say free will is not compatible with physics.
The former takes reasoning from the first person as the foundation; the latter takes reasoning objectively as the foundation.
I see it less as “Humans are more reliable than AI” and more as “Humans and AI do not tend to make the same kinds of mistakes”. The everyday jobs we encounter have been designed/evolved around human mistake patterns, so we are less likely to cause catastrophic failures. Keeping the job constant and replacing humans with AI would obviously lead to problems.
For well-defined simple jobs like arithmetic, AI has a definite accuracy edge over human beings. Even for complex jobs, I am still unsure whether human beings have the reliability edge. We had quite a few human errors leading to catastrophic failures in the early stages of complex projects like space exploration, nuclear power plants, etc. But as time passes we rectify the job so that human errors are less likely to cause failure.
The “I” is primitively defined by the first-person perspective. After waking up from the experiment, you can naturally tell this person is “I”. It doesn’t matter if there exists another copy physically similar to you. You are not experiencing the world from their perspective.
You can repeat the experiment many times and count your first-person experience. That is the frequentist model.
I will reply here. Because it is needed to answer the machine experiment you laid out below.
The difference is that for random/unknown processes, there is no need to explain why I am this particular person. We can just treat it as a given. So classical probabilities can be used without any additional assumptions.
For the fission problem, I cannot keep repeating the experiment and expect the relative frequency of me being L or R to converge on any particular value. To get the relative fraction it has to be calculated from all copies. (or come up with something explaining how come the first-person perspective is a particular person.)
The coin toss problem, on the other hand, I can keep repeating the coin toss and record the outcome. As long as it is a fair coin, as the iterations increase, the relative fractions would approach 1⁄2 for me. So there is no problem saying the probability is half.
As long as we don’t have to reason why the first-person perspective is a particular person, everything is rosy. We can even put the fission experiment and the coin toss together: After every fission, I will be presented with a random toss result. As the experiment goes on I would have seen about equal numbers of Heads and Tails. The probability for Head is still 1⁄2 for post-fission first-person.
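The frequentist claim above can be sketched numerically. This is a minimal simulation under my own framing (the function and variable names are mine): whichever path the first-person thread takes through the fissions, the path is treated as given rather than sampled, and the coin tosses it witnesses are still fair, so the recorded frequency of Heads approaches 1⁄2.

```python
import random

random.seed(0)

def heads_frequency_along_path(iterations, next_side):
    """Follow one first-person thread through repeated fission-plus-
    coin-toss experiments and record the coin results it witnesses.
    `next_side` supplies the path through the fissions; it is a given,
    not a random sample of anything."""
    heads = 0
    for _ in range(iterations):
        _side = next_side()        # "L" or "R": which copy the thread continues as
        if random.random() < 0.5:  # independent fair coin shown after the fission
            heads += 1
    return heads / iterations

# The path makes no difference to the coin-toss frequency.
freq_always_left = heads_frequency_along_path(100_000, lambda: "L")
freq_arbitrary = heads_frequency_along_path(100_000, lambda: random.choice("LR"))
print(freq_always_left, freq_arbitrary)  # both ~0.5
```

No matter how the path is chosen, no convergent relative frequency exists for “me being L”, while the coin-toss frequency converges to 1⁄2 on every path.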
For coin tosses, I could get a long row of just Heads that throws off my payoff. But that does not mean my original strategy was wrong; it is a freakishly small-chance event. But if I am LLLLL.....LLLL in a series of fission experiments, I can’t even say that is something with a freakishly small chance. It’s just who I am. What does “it is a small-chance event for me to be LLLLLL..” even mean? Some additional assumption explaining the first-person perspective is required.
That is why at the bottom of my post I used the incubator example to contrast the difference between self-locating probabilities and other, regular probabilities about random/unknown processes.
So to answer your machine example, there is no valid strategy for the first case, as it involves self-locating probability. But for the second case, where the machines are randomly assigned, I would press “I am L”, because the probabilities are equal and it gives 1 dollar more payoff. (My understanding is that even if I am not L, as long as I press that button for that particular machine, it would still give me the 1000 dollars.) This can be checked by repeating the experiment. If a large number of iterations is performed, pressing the “I am L” button will give rewards half the time. So does pressing the other button, but the reward is smaller. So if I want to maximize my money, the strategy is clear.
Provided the objective is to maximize my money, there is no way to reason about it. So either of your example answers is fine; neither is more valid or invalid than any other answer.
Personally, I would just always guess a positive answer and forget about it. As it saves more energy. So “I am L”, and “I am LR” to your problems. If you think that is wrong I would like to know why.
Your answer based on expected value could maximize the total money of all copies (assuming everyone has the same objective and makes the same decision). Maximizing the benefit of people similar to me (copies) at the expense of people different from me (the bet offerer) is an alternative objective. People might choose it due to natural feelings; after all, it is a beneficial evolutionary trait. That’s why this alternative objective seems attractive, especially when there is no valid strategy to maximize my benefit specifically. But as I have said, it does not involve self-locating probability.
I think I am starting to get where our disagreement lies. You agree that “all choices are illusions”. By this, there is no point in thinking about “how should I decide”. We can discuss what kind of decision-maker would benefit most in this situation, which is the “outsider perspective”. Obviously, one-boxing decision-makers are going to be better off.
The controversy arises if we reason as the first person when facing the two boxes. Regardless of the content of the opaque box, two-boxing should give me 1000 dollars more. The causal analysis is quite straightforward. This seems to contradict the first paragraph.
What I am suggesting is that the two lines of reasoning are parallel to each other; they are based on different premises. The “god’s eye view” treats the decision-maker as an ordinary part of the environment, like a machine. The first-person analysis treats the self as something unique: a primitively identified, irreducible perspective center, i.e. THE agent, as opposed to part of the environment (similar to how a dualist agent considers itself). Here free will is a premise. I think they are both correct, yet because they are based on different perspectives (thus different premises) they cannot be mixed together (kind of like deductions from different axiomatic systems cannot be mixed). So from the first-person perspective, I cannot take into consideration how Omega has analyzed me (like a machine) and filled the box accordingly. For the same reason, from a god’s eye view, we cannot imagine being the decision-maker himself facing the two boxes and choosing.
If I understand correctly, what you have in mind is that those two approaches must be put together to arrive at a complete solution. Then the conflict must be resolved somehow, and it is done by letting the god’s eye view dominate over the first-person approach. This makes sense because, after all, treating oneself as special does not seem objective. Yet that would deny free will, which could call all causal decision-making processes into question. It also brings up a metaphysical debate: which is more fundamental, reasoning from a first-person perspective or reasoning objectively?
I bring up anthropics because I think this exact issue, mixing reasoning from different perspectives, is what leads to the paradoxes in that field. If you do not agree with treating perspectives as premises and keeping the two approaches separate, then there is indeed little connection between that and Newcomb’s paradox.
Instead of examining how SIA behaves given multiverse theories are true, a better approach is to examine what SIA says about the validity of multiverse theories.
And the result is simple, SIA heavily favours multiverse theories, as they greatly inflate the total number of observers in existence. It does not matter what kind of multiverse theories they are. It could be a very-very large universe (thus many casually independent regions), it could be a plethora of universes with different physical parameters, it could be the many-worlds interpretations of quantum mechanics, it could also be the simulation argument where a super majority of observers are computer-generated.
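The strength of this effect can be shown with a toy Bayesian update. This is only an illustrative sketch with made-up numbers: under SIA the posterior weight of each theory is proportional to its prior times its observer count, so any theory that multiplies the number of observers by a large factor swamps the prior.

```python
# Toy SIA update: posterior ∝ prior × number of observers.
# The priors and observer counts below are made up for illustration.
priors = {"single universe": 0.5, "multiverse": 0.5}
observers = {"single universe": 1e10, "multiverse": 1e16}

weights = {t: priors[t] * observers[t] for t in priors}
total = sum(weights.values())
posterior = {t: w / total for t, w in weights.items()}

print(posterior)  # the multiverse gets essentially all the posterior mass
```

With a million times more observers, the multiverse theory ends up with posterior probability of roughly 0.999999, regardless of how modest its prior was.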
In my experience, most people are unwilling to bite this bullet and say those theories are true simply because I exist. There are two common ways I have seen people attempt to save SIA. 1: play with the reference class by arguing it should not include observers from other universes, e.g. “I could not have been an observer from another universe,” or “I reject the assumption that I could be a computer programme completely.” 2: play with infinity, e.g. “It is difficult to apply probabilistic judgments when infinity is part of the problem, and many multiverse theories imply infinity.” Neither is very convincing.
The answer is simple yet unsatisfying. In those situations, assuming the objective is simple self-interest, there is no rational choice to be made.
If we assume the objective is the combined interest of a proposed reference class, and we further assume every single agent in the reference class follows the same decision theory, then there would be a rational choice. However, that does not correspond to the self-locating probability. It corresponds to a probability that can be consistently formulated from the god’s eye view. E.g. the probability that a randomly chosen observer is simulated rather than the probability that “I” am simulated. Those two are distinctly different unless we mix the perspectives and accept some kind of anthropic assumption such as SSA or SIA.
The information between the two in the meeting is not exactly the same. From the A person’s perspective this is a meeting between a specific A (I) and an unspecific B. From the B person’s perspective this is a meeting between a specific B (again, I) and an unspecific A. The importance of this specification can be checked by changing a particular individual in the meeting to a different one and seeing whether that affects the reasoning. For example, from that A’s perspective, if this particular A (I) did not meet a B, his reasoning would be entirely different; it doesn’t matter if another A has met a B. Yet if he met not this particular B but some other B instead, his reasoning would remain the same. This difference of specification is entirely due to their different perspectives. It is incommunicable.
Consider this experiment. An alien has abducted you and one of your friends. You are put to sleep. The alien then tosses a fair coin. If it lands on heads, it does nothing to you. If it lands on tails, it clones you and puts the clone into another identical room. The cloning process is highly accurate, so memory is retained; as a result neither the clone nor the original can tell which one he is. Meanwhile your friend never goes through any cloning process. After waking you up, the alien lets your friend choose one of the two rooms to enter. Suppose your friend has chosen your room, so the two of you meet inside. How should you reason about the probability of the coin toss? How should your friend reason about it?
For my friend the question is non-anthropic, thus very simple. If the coin landed heads, then 1 of the 2 rooms would be empty. If the coin landed tails, then both rooms would be occupied. Because the room she randomly chose is occupied, she now has new evidence favouring tails. As a result the probability of heads can be calculated by a simple Bayesian update to be 1⁄3.
For halfers the question is not too complicated either. After waking up I have no new evidence about the fair coin toss. So I ought to believe the probability of heads is 1⁄2. Because my friend is randomly choosing between two rooms, regardless of the coin toss result the probability of my room being chosen is always half. Therefore seeing my friend gives me no new information about the coin toss either. This means I should keep believing that the probability of heads to be 1⁄2.
Here the disagreement is apparent. Even though the two of us appear to have the same information about the coin toss, we assign different probabilities to the same proposition. To make the matter more interesting, nothing I could say would change her mind, and vice versa. We can communicate however we like, but nobody is going to revise their answer. This may seem strange, but it is completely justified.

The cause of this disagreement is our different interpretations of who exactly is in this meeting. Remember, according to my friend’s reasoning, the evidence that causes the probability update is “the chosen room is occupied.” The occupant, in case there are duplicates, cannot be specified from her perspective. In other words, as long as there is someone in the room she will reason as such. This is expected, since the cloning procedure is highly accurate, so there is no objective feature relevant to the coin toss that differentiates the duplicates. However, from my first-person perspective I can inherently specify the one she is meeting with: it is me, myself. But this specification is only valid from the first-person perspective. First-person identity is based on the immediacy of perception, which is primitive. It is incommunicable. I can keep telling her “this is me” and it would not mean anything to her. As a result the two of us would keep our own answers and remain in disagreement.
This disagreement is also valid with a frequentist interpretation, which in my opinion is also easier to understand. The experiment can be repeated many times, and the relative frequency can be used to show the correct probability. From my perspective, repeating the experiment simply involves going back to sleep and waking up again after a coin toss and the potential cloning process. Of course, after waking up I may not be the same physical human being, just as in the first experiment. But this does not matter, because from the first-person perspective I am defined primitively, based on subjective identity instead of objective features or qualities. So I would always regard the one falling asleep the previous night as part of my subjectively persistent self, since it was the center of my perspective.

To make the procedure easier, suppose I can check the previous coin toss result before going back to sleep again. So each iteration can be summarized as: go to sleep, wake up, check the coin. Imagine repeating this iteration 1000 times by my count. I would have experienced about 500 heads and 500 tails. Furthermore, if my friend is involved, then I would see her about 500 times, with about equal numbers of occurrences after heads and after tails. However, for these 1000 coin tosses my friend would see an occupied room about 750 times. The extra 250 times would be due to her seeing the other duplicate instead of me after tails. It is easy to see that our relative frequencies of heads given a meeting are indeed different: half for me, a third for her. Of course, my friend would be involved in far more repetitions than I am, since all duplicates of me are indistinguishable from her perspective, so she would be involved in their repetitions as well. However, her relative frequency would remain unchanged by the higher number of iterations.
The disagreement arises because my friend meeting someone in the room is technically not the same event as me meeting her. This is quite clear once the experiment is repeated a large number of times, as discussed above. Her seeing someone in the room covers more repetitions than me meeting her (750 vs 500). So we are actually assigning probabilities to different events. In my opinion this means the disagreement does not technically violate the theorem, even though it may seem so superficially.
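The 750-vs-500 counting argument is easy to check with a quick Monte Carlo simulation. This is a sketch under my framing (by symmetry, which physical room “my” thread occupies does not affect the frequencies, so I fix it at room 0):

```python
import random

random.seed(0)

trials = 100_000
my_meetings = my_meetings_heads = 0
occupied_rooms = occupied_heads = 0

for _ in range(trials):
    heads = random.random() < 0.5
    # Room 0 always holds me; after Tails a duplicate also fills room 1.
    occupied = {0} if heads else {0, 1}
    chosen = random.choice([0, 1])  # the friend picks a room at random

    # Friend's event: "the chosen room is occupied" (whoever is inside).
    if chosen in occupied:
        occupied_rooms += 1
        occupied_heads += heads

    # My event: "the friend walked into my room", i.e. she met me.
    if chosen == 0:
        my_meetings += 1
        my_meetings_heads += heads

my_freq = my_meetings_heads / my_meetings      # ~1/2
friend_freq = occupied_heads / occupied_rooms  # ~1/3
print(my_freq, friend_freq)
```

Note that the friend’s event occurs in about 3⁄4 of the trials while mine occurs in about 1⁄2, matching the 750-vs-500 counts above: the two conditioning events really are different.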
One thing that should be noted: while Adam’s argument is influential, especially since it was the first (to my knowledge) to point out that halfers have to either reject Bayesian updating upon learning it is Monday or accept that a fair coin yet to be tossed has a probability other than 1⁄2, thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information when waking up in the experiment. In contrast, most thirders endorsing some version of SIA would say waking up in the experiment is evidence favouring Tails, which has more awakenings. Therefore targeting Adam’s argument specifically is not very effective.
In your incubator experiment, thirders, in general, would find no problem: waking up, evidence favouring tails P(T)=2/3. Finding it is room 1: evidence favouring Heads, P(T) decreased to 1⁄2.
Here is a model that might interest halfers. You participate in this experiment: the experimenter tosses a fair coin. If Heads, nothing happens and you sleep through the night uneventfully. If Tails, they split you down the middle into two halves, completing each half by cloning the missing part onto it. The procedure is accurate enough that memory is preserved in both copies. Imagine yourself waking up the next morning: you can’t tell whether anything happened to you, whether either of your halves is the same physical piece as yesterday, or whether there is another physical copy in another room. But regardless, you can participate in the same experiment again, and the same thing happens when you find yourself waking up the following day, and so on. As this continues, you will count about an equal number of Heads and Tails in the experiments you have subjective experiences of.
Counting subjective experience does not necessarily lead to Thirderism.
As a Chinese person, I want to contribute some thoughts on this topic.
One thing I want to mention is the difference in language. Classical Chinese is extremely difficult to master: it literally takes decades of effort to be able to write a decent piece. It is hard not because of complicated grammar or complex sentence structure, but because it focuses on poetic expressions and scholarly idioms. The language is very enjoyable to read and relatable when used to express emotions and ideas, but it is quite cumbersome for expressing precise logic and definitions. Yet at least before the New Culture Movement in 1916, it was generally regarded that anything worth putting into writing should be written in Classical Chinese. This severely limited the participation of the general populace. Even someone trained enough to write about scientific topics in Classical Chinese was unlikely to be regarded as producing a masterful piece or to gather much of an audience, just as a poorly written post on LessWrong is likely to be skipped regardless of its content.