I’ve been trying to understand, but your model appears underspecified and I haven’t been able to get clarification. I’ll try again.
> treat perspectives as fundamental axioms
Have you laid out the axioms anywhere? None of the posts I’ve seen go into enough detail for me to be able to independently apply your model.
> like saying I assumed Beauty knows she’s not the clone while I clearly stated the opposite
This is not clear at all. In this comment you wrote:

> the first-person perspective is primitively given simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.
In the earlier comment:
> from the first-person perspective it is primevally clear the other copy is not me.
I don’t know how these should be interpreted other than as implying that you know you’re not a clone (if you’re not). If there’s another interpretation, please clarify. The claim also seems obviously false, because “I don’t know which person I am among several subjectively indistinguishable persons” is basically tautological.
> If MWI does not require perspective-independent reality, then what is the universal wave function describing?
It’s a model that’s useful for prediction. As I said in that post, this is my formulation of MWI; I prefer formulations that don’t postulate reality, because I find the concept incoherent.
> But when I followed up on your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I got no feedback from you...
That was a separate thread, where I was responding to someone who apparently had a broader conception of CI. They never explained what assumptions go into that version; I was merely responding to their point that CI doesn’t say much. If you disagree with their conception of CI, then my comment doesn’t apply.
> Your position that SIA is the “natural choice” and paradox-free is a very strong claim.
It seems natural to me, and none of the paradoxes I’ve seen are convincing.
> what is the framework
Start with a standard universal prior, plus the assumption that if an entity “exists” in both world A and world B, where the worlds “exist” with probabilities P(A) and P(B) respectively, then the relative probability of me “being” that entity in world A versus world B is P(A)/P(B). I can then condition on all the facts I know about myself, which collapses this to only the entities that I “can” be given that knowledge.
Per my metaphysics, the words in quotes are not ontological claims but just a description of how the universal prior works—in the end, it spits out probabilities and that’s what gets used.
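To make this concrete, here’s a minimal sketch of the recipe applied to a toy Sleeping Beauty setup (the world names, observer labels, and the `credences` helper are my own illustration; the two world probabilities stand in for the universal prior):

```python
# Toy stand-in for a universal prior: two worlds and the observers they contain.
worlds = {
    "heads": {"prob": 0.5, "observers": ["monday_awakening"]},
    "tails": {"prob": 0.5, "observers": ["monday_awakening", "tuesday_awakening"]},
}

def credences(worlds, consistent):
    """Weight 'being' observer o in world w by P(w), then condition:
    keep only the (world, observer) pairs consistent with what I know."""
    weights = {(w, o): d["prob"]
               for w, d in worlds.items()
               for o in d["observers"]
               if consistent(w, o)}
    total = sum(weights.values())
    return {k: p / total for k, p in weights.items()}

# Every awakening is subjectively indistinguishable, so nothing is ruled out.
result = credences(worlds, lambda w, o: True)
p_heads = sum(p for (w, _), p in result.items() if w == "heads")
print(result)   # each (world, awakening) pair gets weight 1/3
print(p_heads)  # 0.333..., the standard SIA ("thirder") answer
```

Conditioning works the same way: on learning “it’s Monday”, pass `consistent=lambda w, o: o == "monday_awakening"` and P(heads) becomes 1/2, as SIA prescribes.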
> If you don’t know what my theory would predict, then give me some scenarios or thought experiments and make me answer them.
I would like to understand in which scenarios your theory refuses to assign probabilities. My framework will assign a probability to any observation, but you’ve acknowledged that there are some questions your theory will refuse to answer, even though a simple observation would settle them. This is highly counterintuitive to me.
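For instance, in the cloning case discussed above, the same recipe assigns a definite answer even before anything distinguishing is observed (again, the labels and prior are my own illustration):

```python
# One world, prior 1.0, containing two subjectively indistinguishable observers.
worlds = {"clone_world": {"prob": 1.0, "observers": ["original", "clone"]}}

weights = {(w, o): d["prob"]
           for w, d in worlds.items() for o in d["observers"]}
total = sum(weights.values())
print({k: p / total for k, p in weights.items()})
# {('clone_world', 'original'): 0.5, ('clone_world', 'clone'): 0.5}
```

Performing the simple observation then conditions these 0.5s to 0 or 1; as far as I can tell, this is exactly the kind of question your theory declines to answer.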