I would say that the concept of probability works fine in anthropic scenarios, or at least there is a well-defined number that is equal to probability in non-anthropic situations. This number is assigned to “worlds as a whole”. Sleeping Beauty assigns 1⁄2 to heads and 1⁄2 to tails, and can’t meaningfully split the tails case depending on the day. Sleeping Beauty is a functional decision theory agent. For each action A, they consider the logical counterfactual that the algorithm they are implementing returned A, then calculate the world’s utility in that counterfactual. They then return whichever action maximizes utility.
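To make that evaluation procedure concrete, here is a rough sketch in Python. The betting setup (a per-awakening bet paying +1 on tails and −1 on heads) and all the names are my own illustration of the idea, not part of the scenario above:

```python
# Sketch of the FDT-style evaluation: probabilities attach to whole worlds
# (1/2 heads, 1/2 tails), and we evaluate the counterfactual "my algorithm
# outputs action A" by summing utility over each whole world.
# Assumed setup: at every awakening Beauty may accept a bet paying +1 if
# the coin was tails and -1 if it was heads.

WORLDS = [
    {"coin": "heads", "prob": 0.5, "awakenings": 1},
    {"coin": "tails", "prob": 0.5, "awakenings": 2},
]

def world_utility(world, action):
    """Total utility of a whole world if the algorithm outputs `action`
    at every awakening (the same algorithm runs each time it is woken)."""
    if action == "decline":
        return 0.0
    per_bet = 1.0 if world["coin"] == "tails" else -1.0
    return per_bet * world["awakenings"]

def fdt_choose(actions):
    """Return the action whose counterfactual has the highest expected
    utility, weighting each world as a whole by its probability."""
    return max(actions, key=lambda a: sum(w["prob"] * world_utility(w, a) for w in WORLDS))

print(fdt_choose(["accept", "decline"]))  # accept: 0.5*(-1) + 0.5*(+2) = +0.5 > 0
```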
In this framework, “which version am I?” is a meaningless question: you are the algorithm. The fact that the algorithm is implemented in a physical substrate gives you the means to affect the world. Under this model, whether or not you’re running on multiple redundant substrates is irrelevant. You reason about the universe without making any anthropic updates. As you have no way of affecting a universe that doesn’t contain you, or someone reasoning about what you would do, you might as well behave as if you aren’t in one. You can make the efficiency saving of not bothering to simulate such a world.
You might, or might not, have an easier time affecting a world that contains multiple copies of you.
“I would say that the concept of probability works fine in anthropic scenarios”—I agree that you can build a notion of probability on top of a viable anthropic decision theory. I guess I was making two points: a) you often don’t need to; b) there isn’t a unique notion of probability, but rather it depends on the payoffs (which disagrees with what you wrote, although the disagreement may be more definitional than substantive).
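To illustrate (b) with a toy calculation (the ticket and both bet structures are my own example): hold the world-level probabilities fixed at 1⁄2 each and ask what price makes buying a ticket that pays 1 on heads break even. The answer depends on the payoff structure, so the “implied probability” read off from betting behaviour is not unique.

```python
# World-level probabilities stay fixed at 1/2 heads, 1/2 tails; tails means
# two awakenings. We ask what ticket price x makes Beauty indifferent to
# buying a ticket that pays 1 if the coin landed heads.

P_HEADS, P_TAILS = 0.5, 0.5
TAILS_AWAKENINGS = 2

# Case 1: the ticket is bought (and settled) at every awakening.
# EU(x) = P_HEADS * 1 * (1 - x) + P_TAILS * TAILS_AWAKENINGS * (0 - x) = 0
x_per_awakening = P_HEADS / (P_HEADS + P_TAILS * TAILS_AWAKENINGS)

# Case 2: the ticket is bought and settled once per experiment.
# EU(x) = P_HEADS * (1 - x) + P_TAILS * (0 - x) = 0
x_per_experiment = P_HEADS / (P_HEADS + P_TAILS)

print(x_per_awakening)   # 1/3: looks like a "thirder" probability of heads
print(x_per_experiment)  # 1/2: looks like a "halfer" probability of heads
```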
“As you have no way of affecting a universe that doesn’t contain you, or someone reasoning about what you would do, you might as well behave as if you aren’t in one”—anthropics isn’t just about existence/non-existence. Under some models there will be more agents experiencing your current situation.
“You might, or might not, have an easier time affecting a world that contains multiple copies of you”—You probably can, but this is unrelated to anthropics.