Double-thick transistors and other subjective phenomena

If I’m running on a silicon computer, do I have twice as much subjective experience if my computer is twice as thick?

Why is this even a good question?

Consider a computer that was printed on a flat sheet. If we stick two of these computers (one a mirror image) together face to face, we get a thicker computer. And then if we peel them apart again, we get two thin computers! Suppose that we simulate a person using these computers. It makes sense that a person running on two thin computers has twice as much “experience” as a person running on just one (for example, in the Sleeping Beauty problem, the correct betting strategy is to bet as if the probability of making the bet in a given world is proportional to the number of thin computers). So if we take two people-computers and stick them together into one thicker person-computer, the thicker person contains twice as much “experience” as a thinner one—each of their halves has as much experience as a thin person, so they have twice as much experience.

Do I disagree? Well, I think it depends somewhat on how you cash out “experience.” Consider the Sleeping Beauty problem with these computers—in the classic version, our person is asked to give their probability that they’re in the world where there’s one thin computer or the world where there are two thin computers. The correct betting strategy is to bet as if the probability that there are two computers is 2/3, weighting each computer equally.
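To see where the 2/3 comes from, count bet-offerings per world. Here’s a minimal Monte Carlo sketch (the names are just for illustration), assuming a fair coin decides between the one-computer and two-computer possibilities and that each running computer is offered exactly one bet:

```python
import random

def betting_weight_two_thin(trials=100_000):
    """Estimate the per-bet frequency of the two-computer world.

    Assumptions (illustrative): a fair coin decides between one thin
    computer and two thin computers, and every running computer is
    offered exactly one bet.
    """
    total_bets = 0
    bets_in_two_computer_world = 0
    for _ in range(trials):
        two_computers = random.random() < 0.5   # fair coin
        n_bets = 2 if two_computers else 1      # one bet per computer
        total_bets += n_bets
        if two_computers:
            bets_in_two_computer_world += n_bets
    return bets_in_two_computer_world / total_bets

print(betting_weight_two_thin())  # converges to ~0.667, i.e. 2/3
```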

Now, consider altering the experiment so that either there’s one thin computer, or one double computer. We have two possibilities—either the correct betting probability is 1/2 and the computers seem to have equal “experience”, or we bite the bullet and say that the correct betting probability is 2/3 for a double computer, 10/11 for a 10x thicker computer, 1000/1001 for a 1000x thicker computer, etc.
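The bullet-biting numbers all follow one rule: if a k-times-thick computer is weighted like k thin computers, and a fair coin decides between the thin and thick possibilities, then the betting weight of the thick world is k/(k+1). A one-line sketch of that rule (purely illustrative, not a claim about any particular setup):

```python
def bullet_biting_weight(thickness):
    """Betting probability of the thick-computer world, if a k-times-thick
    computer is weighted like k thin computers and a fair coin decides
    between the thin and thick possibilities.  Illustrative sketch."""
    return thickness / (thickness + 1)

print(bullet_biting_weight(2))     # 0.666...  (2/3)
print(bullet_biting_weight(10))    # 0.909...  (10/11)
print(bullet_biting_weight(1000))  # 0.999...  (1000/1001)
```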

The bullet-biting scenario is equivalent to saying that the selfish desires of the twice-thick computer are twice as important. If one computer is one person, a double computer is then two people in a box.

But of course, if you have a box with two people in it, you can knock on the side and go “hey, how many of you people are in there? I’m putting in an order for Chinese food, how many entrees should I get?” Instead, the double-thick computer is running exactly the same program as the thin computer, and will order exactly the same number of entrees. In particular, a double-thick computer will weigh selfish vs. altruistic priorities exactly the same way a thin computer does.

There is one exception to the previous paragraph—what if the computer is programmed to care about its own thickness (measuring it with external instruments, since introspection won’t do) and to weight its desires more when it’s thicker? This is certainly possible, but by putting the caring straight into the utility function, it removes any possibility that the caring is some mysterious “experience.” It’s just a term in the utility function—it doesn’t have to be there; in fact, by default it’s not. Or, heck, your robot might just as easily care more about things when the tides are high, and that doesn’t mean that high tides grant “experience.”
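To make the “just a term in the utility function” point concrete, here is a toy sketch (every name and parameter in it is hypothetical): the thickness term is an ordinary coefficient that has to be measured and put in deliberately, exactly like a tide term, and by default it’s zero.

```python
def utility(selfish_value, altruistic_value, thickness=1.0, tide_height=0.0,
            care_about_thickness=0.0, care_about_tides=0.0):
    """Toy utility function (illustrative only).

    Caring more about your own desires when you're thicker is just an
    explicit coefficient on a measured quantity -- structurally no
    different from caring more when the tide is high.  By default both
    coefficients are zero, i.e. the term simply isn't there.
    """
    selfish_weight = (1.0
                      + care_about_thickness * thickness
                      + care_about_tides * tide_height)
    return selfish_weight * selfish_value + altruistic_value
```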

The original Sleeping Beauty problem, now *that’s* mysterious “experience.” Ordinary computers go in, and a rule that weights each possibility by its number of computers comes out. So something must happen when you merge the two thin computers into a double computer that destroys that experience rather than conserving it.

What do I claim explains this? The simple fact that you only offer the double computer one bet, not two. Sure, the exact same signals go to the exact same wires in each case. Except for the prior information that says the experimenter can only make one bet, not two. In this sense, “experience” just comes from the ways in which our computer can interact with the world.
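One way to see this concretely is to hold the coin flip fixed and vary only how many bets the experimenter offers. A sketch under the same fair-coin assumption as above (names are illustrative):

```python
import random

def per_bet_weight(bets_in_special_world, trials=100_000):
    """Fraction of bet-offerings that land in the special world, when a
    fair coin decides between a baseline world (one thin computer, one
    bet) and a special world where the experimenter offers
    `bets_in_special_world` bets.  Only the number of bets changes;
    nothing inside the computers does.  Illustrative sketch.
    """
    total, special = 0, 0
    for _ in range(trials):
        if random.random() < 0.5:            # special world
            total += bets_in_special_world
            special += bets_in_special_world
        else:                                # baseline world
            total += 1
    return special / total

print(per_bet_weight(2))  # two separate thin computers, two bets -> ~2/3
print(per_bet_weight(1))  # one double-thick computer, one bet    -> ~1/2
```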

So since a double-thick computer is not more selfish than a thin one (neglecting the tides), and will not expect to be a thick computer more often in the Sleeping Beauty problem, I’d say it doesn’t have more “experience” than a thin computer.

EDIT: I use betting behavior as a proxy for probability here because it’s easy to see which answer is correct. However, using betting behavior as a probability is not always valid—e.g. in the absent-minded driver problem. In the Sleeping Beauty case it only works because the payout structure is very simple. A safer way would be to derive the probabilities from the information available to the agents, which has been done elsewhere, but is harder to follow.