I haven’t classified that, because I haven’t used the anthropic reasoning. I was just trying to figure out what, from a timeless perspective, the “correct” decision must be.
Then afterwards I’ll think of methods of reaching that decision. Though it seems that using your probability estimate (known as SIA) and a “division of responsibility” method, we get the answers presented here.
Well, you haven’t classified it yet :D But it seems like it would be type 4. Yet it would produce different results than type 4 (crosses), for the obvious reason that it estimates the up-front probabilities differently. This is notably better by the measure that if all agents built crosses, the outcome would be better overall than if all agents built boxes, so the expected utility per agent is larger. Basically, types 1 and 4 become unified if the agent can’t tell between its different possible circumstances, because type 1 maximizes the utility of all agents and type 4 maximizes the expected utility of that one agent, which is just (type 1’s total)/N.
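That unification can be sketched numerically. Below is a minimal, purely illustrative toy (the utility numbers and the `N = 10` population are made up, not from the problem setup): when every agent is indistinguishable and follows the same strategy, the type-4 objective (one agent's expected utility) is just the type-1 objective (total utility) divided by N, so both objectives pick the same strategy.

```python
# Illustrative toy: utilities are assumed numbers, chosen only so that
# "all build crosses" beats "all build boxes" overall.
N = 10
TOTAL_UTILITY = {"crosses": 100.0, "boxes": 60.0}

def total_utility(strategy):
    """Type-1 objective: total utility summed across all N agents."""
    return TOTAL_UTILITY[strategy]

def expected_utility_per_agent(strategy):
    """Type-4 objective: the expected utility of one indistinguishable
    agent, which is just the total divided by N."""
    return total_utility(strategy) / N

# Dividing by a positive constant N preserves the ranking of strategies,
# so the two objectives agree on the best strategy.
best_type1 = max(TOTAL_UTILITY, key=total_utility)
best_type4 = max(TOTAL_UTILITY, key=expected_utility_per_agent)
```

Since dividing by N is a monotone transformation, any strategy profile that maximizes one objective maximizes the other — which is the sense in which types 1 and 4 unify here.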