Note that if you subscribe to MWI, the whole thing is completely deterministic, and so you can’t decide to pour different amounts of this “existence juice” into different “branches” by making smarter decisions about AI research. The outcome was predetermined at the time the universe was created. All you do is LARP until reality reveals itself.
i don’t think determinism is incompatible with making decisions, just like nondeterminism doesn’t mean my decisions are “up to randomness”; from my perspective, i can either choose to do action A or action B, and from my perspective i actually get to steer the world towards what those actions lead to.
put another way, i’m a compatibilist; i implement embedded agency.
put another way, yes i LARP, and this is a world that gets steered towards the values of agents who LARP, so yay.
That’s the part that makes no sense to me. (Neither does compatibilism, to be honest, which to me has little to do with embedded agency.) It seems like the causality error you point out runs in the wrong direction: your LARPing and the outcomes have a common cause, but there is no “if we do this, the world ends up like that” in the “territory”. Anyway, this seems like a tired old debate, probably not worth having.
The same, modulo a few coinflips, is true for the collapse interpretations.
yeah, not arguing, but people tend to think about probabilistic evolution as “not set in stone” and potentially influenced by our actions. There is no out like that for the completely deterministic world.
I don’t think this matters all that much. In Newcomb’s problem, even though your decision is predetermined, you should still want to act as if you can affect the past, specifically Omega’s prediction.
There is no “ought” or “should” in a deterministic world of perfect predictors. There is only “is”. You are an algorithm and Omega knows how you will act. Your inner world is an artifact that gives you an illusion of decision making. The division is simple: one-boxers win, two-boxers lose, the thought process that leads to the action is irrelevant.
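To spell out the arithmetic behind “one-boxers win, two-boxers lose”: a minimal sketch, where the 99% predictor accuracy and the standard $1,000,000 / $1,000 payoffs are illustrative assumptions, not anything stated above.

```python
# Toy expected-value calculation for Newcomb's problem.
# Opaque box: $1,000,000 iff Omega predicted one-boxing.
# Transparent box: always $1,000. Predictor accuracy p is assumed.

def expected_payoff(one_box: bool, p: float = 0.99) -> float:
    if one_box:
        # With probability p, Omega predicted correctly and filled the opaque box.
        return p * 1_000_000
    # With probability (1 - p), Omega wrongly predicted one-boxing, so the
    # opaque box is full anyway; the two-boxer also takes the guaranteed $1,000.
    return (1 - p) * 1_000_000 + 1_000

print(expected_payoff(True))   # roughly $990,000 in expectation
print(expected_payoff(False))  # roughly $11,000 in expectation
```

Note that the calculation is indifferent to *why* the agent outputs one-boxing; only the output enters the expectation.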
One-boxers win because they reasoned in their heads that one-boxers win, because of updateless decision theory or something, so they “should” be one-boxers. The decision is predetermined, but the reasoning acts like it has a choice in the matter (and people who act like they have a choice in the matter win). What carado is saying is that people who act like they can move around the realityfluid tend to win more, just like how people who act like they have a choice in Newcomb’s problem, and one-box, win even though they don’t have a choice in the matter.
None of this is relevant. I don’t like the “realityfluid” metaphor, either. You win because you like the number 1 more than number 2, or because you cannot count past 1, or because you have a fancy updateless model of the world, or because you have a completely wrong model of the world which nonetheless makes you one-box. You don’t need to “act like you have a choice” at all.
The difference between an expected utility maximizer using updateless decision theory and an entity who likes the number 1 more than the number 2, or who cannot count past 1, or who has a completely wrong model of the world which nonetheless makes it one-box, is that the expected utility maximizer wins in scenarios outside of Newcomb’s problem, where you may have to choose $2 instead of $1, or count quantities larger than 1, or believe true things. Similarly, an entity that “acts like it has a choice” generalizes well to other scenarios, whereas these other possible entities don’t.
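The generalization point can be made concrete with a toy sketch; the games, payoffs, and agent names here are purely illustrative assumptions.

```python
# Each game is a list of payoffs; an agent returns the index of its pick.

def hardcoded_agent(payoffs):
    # Always picks option 0 - the move that happens to win Newcomb's problem.
    return 0

def maximizing_agent(payoffs):
    # Compares payoffs and picks the best option in any game.
    return max(range(len(payoffs)), key=lambda i: payoffs[i])

newcomb = [990_000, 11_000]   # expected payoffs: one-box vs two-box
dollars = [1, 2]              # take $1 vs take $2

for game in (newcomb, dollars):
    print(game[hardcoded_agent(game)], game[maximizing_agent(game)])
```

The hardcoded agent matches the maximizer on the Newcomb-like game but leaves a dollar on the table in the second game, which is the sense in which it fails to generalize.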
Yes, agents whose inner model is counting possible worlds, assigning probabilities and calculating expected utility can be successful in a wider variety of situations than someone who always picks 1. No, thinking like an entity that “acts like it has a choice” does not generalize well, since “acting like you have a choice” leads you to CDT and two-boxing.