It seems problematic that the boxes are already filled when you are deciding whether to one-box or two-box; that your decision could somehow cause the contents of the box to be one way or the other. But it's not really like that, because your decision has already been determined, even though you don't know what that decision will be and are still in the process of writing your decision algorithm.
So you are currently writing your decision algorithm, based on the situation you find yourself in where you need to make a decision. It feels free to you, but of course it isn't, because your "choices" while writing the decision algorithm are controlled by other-level functions which are themselves deterministic. These other-level functions "should" compute that their decisions will affect, or have affected, the outcome, and decide to one-box. By "should" I mean that this is what is best for the decider, even though they have no control over how the decision algorithm actually gets written or what the outcome is.
The person who one-boxes will get more money than the person who two-boxes. Since a person has no real choice about which sort of algorithm they are, you can only say that one algorithm is better than the other at getting the most money in this situation; "should" would imply some kind of free will.
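The claim that the one-boxer ends up richer can be made concrete with a quick expected-value sketch. This is a minimal illustration, not anything from the text: it assumes the standard Newcomb amounts ($1,000 in the visible box, $1,000,000 in the opaque box) and a predictor with accuracy `p`, both of which are my invented parameters.

```python
def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars, where p is the predictor's accuracy.

    Assumes the standard stakes: $1,000 visible, $1,000,000 opaque.
    """
    if one_box:
        # Predictor correct -> opaque box was filled.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Predictor correct -> opaque box was left empty, so you keep
        # only the visible $1,000; if wrong, you get both boxes.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

p = 0.9
print(expected_payoff(True, p))   # one-boxing: about $900,000 expected
print(expected_payoff(False, p))  # two-boxing: about $101,000 expected
```

At these stakes, one-boxing comes out ahead for any predictor accuracy above roughly 50.05%, so the predictor barely has to be better than chance.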
Suppose there is a mutation that makes some plants of a species hardier than those without it. We would say that one set of plants is better than the other at being hardy. We wouldn't say that a plant "should" be the hardy variety; it has no choice in the matter.
We are just like the plants. Some of us have a better set of states than others for success at whatever we measure. One of these states is an algorithm that computes what to do (makes "choices") based on the predicted consequences of anticipated actions. When we run this algorithm and compute the "chosen" action, it feels from the inside like we are making a choice. It feels like a choice because we input multiple possible actions and the algorithm outputs a single action. But really we just happen to have an algorithm that outputs that action. The algorithm can certainly be modified over time, but the self-modification is itself deterministic and depends on the environment.
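The kind of algorithm described above can be sketched in a few lines. This is a toy illustration under my own assumptions: the action names, the perfect-predictor outcome model, and the utility numbers are all invented, and the point is only that the same inputs always produce the same "choice".

```python
def choose(actions, predict_outcome, utility):
    """Return the action whose predicted consequence has highest utility.

    Deterministic: identical inputs always yield the same "choice",
    even though multiple actions go in and one comes out.
    """
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Invented toy model: a perfect predictor, so two-boxing only ever
# yields the small prize.
outcomes = {"one-box": "million", "two-box": "thousand"}
values = {"million": 1_000_000, "thousand": 1_000}

picked = choose(["one-box", "two-box"], outcomes.get, values.get)
print(picked)  # prints "one-box"
```

From the inside, running `choose` would "feel like" weighing options; from the outside, it is just a function evaluation.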
Less Wrong is an environment where we are making the algorithm reflect upon itself. Everyone should consider the possibilities (have an algorithm that one-boxes, or one that two-boxes?), realize that the one-box algorithm is the one that makes them rich, and update their algorithm accordingly.
Or, more precisely: everyone will consider the possibilities (have an algorithm that one-boxes, or one that two-boxes?), may realize that the one-box algorithm is the one that makes them rich, depending on their states, and may update their algorithm accordingly, again depending on their states. People generally have good algorithms, so I predict they will do this unless they have a reason not to. What is that reason?
So should you one-box or two-box?