My position has been the same. It starts from one assumption: treat perspectives as fundamental axioms. Then reasoning from different perspectives is never mixed, and indexicals are perspective-based and thus primitively identified. So we would not treat "I" or "now" as the result of some imaginary random sampling, which is what has led to the anthropic debate between SSA and SIA. This was laid out in the first post you replied to.
You say I do not respond to the issues you raised. It seems I simply cannot. Every time I provide a detailed explanation, I get comments such as "but you are not arguing, you are just asserting", "that is not a substantive argument", or "I deny what you say". You just give verdicts without explaining where you think I am wrong. And when you do try to engage with the reasoning, you don't even read what I wrote, like saying I assumed Beauty knows she's not the clone while I clearly stated the opposite.
If you don’t know what my theory would predict, give me scenarios or thought experiments and ask me to answer them. If you do not understand something I said, ask me to clarify. I would gladly answer, because it helps to explain my position and outline our disagreement. (Btw, "the first-person perspective is primitively given" simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.) But by dismissing my efforts as above, it seems you are not interested in that. You just want to argue that your position is the better one.
Regarding the MWI post: if MWI does not require a perspective-independent reality, then what is the universal wave function describing? Sure, we generally accept this objective reality, and if an interpretation suggests otherwise, it is usually deemed that interpretation's burden to provide such a metaphysics (e.g. the participatory realism of QBism). Sean Carroll regards this as a reason to favor, or default to, MWI. I argued that if Thomas Nagel's three steps of how we arrive at the idea of objectivity are correct, then perspective-independent objectivity is itself an assumption. The response I get is that you deny my argument. That's it. What can I possibly say after that? You want me to understand your model of MWI, but when I followed up on your statement that some versions of CI can be considered a special version of MWI and explained why I think that is not possible, I got no feedback from you...
You say I am not pointing to SIA supporters who hold different opinions from yours, because you said you don't care. And I find it hard to believe that you do not know of any SIA supporters who disagree with your position. For starters, Katja, who brought up the SIA doomsday argument, actually argued SIA is preferable to SSA because of perspective disagreements. Michael Titelbaum, a thirder who gives many strong arguments against halfers, listed naive confirmation of MWI as a problem. Your position that SIA is the “natural choice” and paradox-free is a very strong claim. (If you are that confident, maybe make a post about it?) Regarding your framework for solving the paradoxes... what is the framework? You gave every single problem its own specific explanation. The framework I see is that your version of SIA is problem-free, and counter-intuitive conclusions are always due to something else.
Granted, for open problems like SB or QM it is nearly impossible to convince each other. The productive thing to do would be to try to understand the other party's logic and find the root of the disagreement. That is why I ended our earlier discussion by making a list of our differing positions: while we may not agree with each other, at least we understand the different assumptions that lead to the disagreement. But looking back at your comments, I realize that is not what you are after. You are here to win. Well, I can't keep up with this. So... you win. And, as always, you will have the final word.
I’ve been trying to understand, but your model appears underspecified and I haven’t been able to get clarification. I’ll try again.
treat perspectives as fundamental axioms
Have you laid out the axioms anywhere? None of the posts I’ve seen go into enough detail for me to be able to independently apply your model.
like saying I assumed Beauty knows she’s not the clone while I clearly stated the opposite
This is not clear at all. In this comment you wrote:
the first-person perspective is primitively given simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.
In the earlier comment:
from the first-person perspective it is primevally clear the other copy is not me.
I don’t know how these should be interpreted other than implying that you know you’re not a clone (if you’re not). If there’s another interpretation, please clarify. It also seems obviously false, because “I don’t know which person I am among several subjectively indistinguishable persons” is basically tautological.
If MWI does not require a perspective-independent reality, then what is the universal wave function describing?
It’s a model that’s useful for prediction. As I said in that post, this is my formulation of MWI; I prefer formulations that don’t postulate reality, because I find the concept incoherent.
But when I followed-up your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I get no feedback from you...
That was a separate thread, where I was responding to someone who apparently had a broader conception of CI. They never explained what assumptions go into that version; I was merely responding to their point that CI doesn't say much. If you disagree with their conception of CI, then my comment doesn't apply.
Your position that SIA is the “natural choice” and paradox-free is a very strong claim.
It seems natural to me, and none of the paradoxes I’ve seen are convincing.
what is the framework
Start with a standard universal prior, plus the assumption that if an entity “exists” in both worlds A and B, and world A “exists” with probability P(A) and world B with probability P(B), then the relative probability of me “being” that entity in world A, compared to world B, is P(A)/P(B). I can then condition on all the facts I know about me, which collapses this to only the entities that I “can” be given that knowledge.
Per my metaphysics, the words in quotes are not ontological claims but just a description of how the universal prior works—in the end, it spits out probabilities and that’s what gets used.
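To make the framework concrete, here is a minimal sketch of how that weighting rule plays out on the Sleeping Beauty problem. The setup is my own illustration, not something from the discussion above: two "worlds" (Heads and Tails) with prior 1/2 each, where Heads contains one awakening I could "be" and Tails contains two. Each candidate entity gets unnormalized weight equal to its world's probability, which is the P(A)/P(B) relative-probability rule; normalizing yields the thirder answer.

```python
from fractions import Fraction

# world -> (prior probability of the world, candidate entities in it)
# Illustrative Sleeping Beauty setup: one awakening under Heads,
# two (Monday, Tuesday) under Tails.
worlds = {
    "Heads": (Fraction(1, 2), ["Mon"]),
    "Tails": (Fraction(1, 2), ["Mon", "Tue"]),
}

# Each (world, entity) pair gets unnormalized weight P(world),
# so the relative weight of "being" an entity in A vs. B is P(A)/P(B).
weights = {
    (w, e): p
    for w, (p, entities) in worlds.items()
    for e in entities
}

# Normalize over all candidates I "can" be given my knowledge
# (here: no further conditioning, since awakenings are indistinguishable).
total = sum(weights.values())
posterior = {k: v / total for k, v in weights.items()}

p_heads = sum(v for (w, _), v in posterior.items() if w == "Heads")
print(p_heads)  # 1/3 -- the thirder answer
```

Conditioning on extra self-locating facts would simply delete the (world, entity) pairs incompatible with that knowledge before renormalizing.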
If you don’t know what my theory would predict, give me scenarios or thought experiments and ask me to answer them.
I would like to understand in which scenarios your theory refuses to assign probabilities. My framework will assign a probability to any observation, but you've acknowledged that there are some questions your theory refuses to answer, even though there is a simple observation that would settle the question. This is highly counter-intuitive to me.