Section 4 then showed how those initial results extend to the case of sequential decision making.
[...]
If she’s a resolute chooser, then sequential decisions reduce to a single non-sequential decision.
Ah, thanks, this clears up most of my confusion; I had misunderstood the intended argument here. I think I can explain my point better now:
I claim that proposition 3, when extended to sequential decisions with a resolute decision theory, shouldn’t be interpreted the way you interpret it. The meaning changes when you make A and B into sequences of actions.
Let’s say action A is a list of 1000000 particular actions (e.g. 1000000 small-edits) and B is a list of 1000000 particular actions (e.g. 1 improve-technology, then 999999 amplified-edits).[1]
Proposition 3 says that A is just as likely to be chosen as B (for randomly sampled desires). This is correct. Intuitively, this is because A and B each achieve particular outcomes, and desires are equally likely to favor “opposite” outcomes.
However, this isn’t the question we care about. We want to know whether action-sequences that contain “improve-technology” are more likely to be optimal than action-sequences that don’t contain “improve-technology”, given a random desire function. This is a very different question from the one proposition 3 answers.
Almost all optimal action-sequences could contain “improve-technology” at the beginning, while, of any two particular action-sequences, each is equally likely to be preferred to the other across randomly sampled desires. These two facts don’t contradict each other. The first fact is true in many environments (e.g. the one I described[2]) and this is what we mean by instrumental convergence. The second fact is unrelated to instrumental convergence.
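To make the second fact concrete, here’s a minimal sketch of my own (not from the post), assuming desires are random linear utilities over a final outcome vector and the distribution over them is sign-symmetric; the specific outcome numbers are just the hypothetical edit counts from above:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 20              # dimensionality of the outcome (state) space
n_desires = 100_000   # randomly sampled desires

# Fixed final outcomes of two particular action-sequences (hypothetical numbers):
# A = 1,000,000 small-edits (+0.01) to coordinate 0,
# B = 999,999 amplified-edits (+0.1) to coordinate 1.
outcome_A = np.zeros(dim)
outcome_A[0] = 1_000_000 * 0.01
outcome_B = np.zeros(dim)
outcome_B[1] = 999_999 * 0.1

# A "desire" is a random linear utility over final outcomes.
desires = rng.standard_normal((n_desires, dim))

prefers_A = (desires @ outcome_A) > (desires @ outcome_B)
print(f"fraction of desires preferring A to B: {prefers_A.mean():.3f}")  # ~0.5
```

The ~0.5 comes purely from the symmetry of the desire distribution: for any two fixed outcomes, the sign of the utility difference is equally likely to go either way. Nothing about this comparison tells us which sequences are optimal over the whole menu.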
I think the error might be coming from this definition of instrumental convergence:
could we nonetheless say that she’s got a better than 1/n probability of choosing A from a menu of n acts?
When A is a sequence of actions, this definition makes less sense. It’d be better to define it as something like “from a menu of n initial actions, she has a better than 1/n probability of choosing a particular initial action A1”.
I’m not entirely sure what you mean by “model”, but from your use in the penultimate paragraph, I believe you’re talking about a particular decision scenario Sia could find herself in.
Yep, I was using “model” to mean “a simplified representation of a complex real-world scenario”.
For simplicity, we can make this scenario a deterministic, known environment, and make sure the number of actions available doesn’t change if “improve-technology” is chosen as an action. This way, neither of your biases applies.
E.g. we could define a “small-edit” as ±0.01 to any location in the state vector, and an “amplified-edit” as ±0.1 to any location. This preserves the number of actions and makes the advantage of “amplified-edit” clear. I can go into more detail if you like; this does depend a little on how we set up the distribution over desires.
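Here’s a sketch of one way to set this up, with the choices left open above filled in by assumption (linear utilities over the final state, a Gaussian desire distribution, and a tiny horizon and state so the whole sequence space can be enumerated rather than a million steps):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
dim, horizon, n_desires = 2, 5, 2000

# Actions: 0 = improve-technology (no direct state effect), 1..2*dim = edit one
# coordinate by ±delta, where delta is 0.01 before improve-technology has been
# taken and 0.1 afterwards. The action set never changes size, so the
# "more options" bias doesn't apply.
def final_state(seq):
    state = np.zeros(dim)
    tech = False
    for a in seq:
        if a == 0:
            tech = True
        else:
            coord = (a - 1) // 2
            sign = 1.0 if (a - 1) % 2 == 0 else -1.0
            state[coord] += sign * (0.1 if tech else 0.01)
    return state

actions = range(2 * dim + 1)
seqs = list(itertools.product(actions, repeat=horizon))     # every action-sequence
outcomes = np.array([final_state(s) for s in seqs])         # (n_seqs, dim)
has_tech = np.array([0 in s for s in seqs])                 # sequence improves tech?

# Desires: random linear utilities over the final state.
desires = rng.standard_normal((n_desires, dim))
utilities = desires @ outcomes.T                            # (n_desires, n_seqs)
best = utilities.argmax(axis=1)
print("P(optimal sequence contains improve-technology):", has_tech[best].mean())

# Two particular fixed sequences: all small-edits vs. tech then amplified-edits.
seq_A = seqs.index(tuple([1] * horizon))                    # +0.01 to coord 0 each step
seq_B = seqs.index(tuple([0] + [3] * (horizon - 1)))        # tech, then +0.1 to coord 1
print("P(desire prefers A to B):", (utilities[:, seq_A] > utilities[:, seq_B]).mean())
```

Under these assumptions the first number comes out essentially 1.0 (for any nonzero desire, sacrificing one step to improve technology buys far more reach), while the second stays near 0.5, which is the distinction I’m pointing at.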