Interesting example. Tangentially, I'm guessing that belief in substrate dependence is part of why some folks viscerally dislike Richard Ngo's story The Gentle Romance, which was meant to be utopian. I mostly lean against substrate dependence, so I don't find your example persuasive, although Scott Aaronson's monstrous edge cases do give me pause:
> What if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body?
>
> If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let's not worry about that), that hardwired what your brain's response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness?
>
> Let's keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can't it bring about your consciousness just by sitting there doing nothing?
>
> To these standard thought experiments, we can add more. Let's suppose that, purely for error-correction purposes, the computer that's simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three "copies" of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?
>
> Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker. In the past five years, there's been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry. What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point. So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc. But to someone without the key, it looks for all the world like you're just shuffling random strings and producing other random strings as output.
>
> You can probably see where this is going. What if we homomorphically encrypted a simulation of your brain? And what if we hid the only copy of the decryption key, let's say in another galaxy? Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?
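For anyone who hasn't run into homomorphic encryption before, the core property Aaronson is leaning on can be seen in a much simpler (and much weaker) setting than Gentry-style FHE: unpadded textbook RSA is multiplicatively homomorphic, meaning anyone can multiply two ciphertexts and the result decrypts to the product of the plaintexts, without the multiplier ever seeing the inputs. A toy sketch (insecure, illustrative parameters only, and only one operation rather than the arbitrary circuits FHE supports):

```python
# Toy illustration of the homomorphic property that FHE generalizes.
# Unpadded RSA satisfies E(a) * E(b) ≡ E(a*b) (mod n): you can compute
# on ciphertexts without decrypting. This is NOT secure and NOT fully
# homomorphic (no additions); real FHE schemes support arbitrary circuits.

# Tiny fixed RSA-style parameters (illustrative; real keys are ~2048 bits)
p, q = 61, 53
n = p * q                # 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# Whoever performs this multiplication never sees a or b.
c_product = (ca * cb) % n

print(decrypt(c_product))  # 84 == a * b
```

The "looks like shuffling random strings" intuition is exactly this: the party multiplying `ca` and `cb` is doing meaningful arithmetic on `a` and `b` but, without `d`, sees only numbers indistinguishable (in a real scheme) from noise.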
Obviously you're not obliged to, but if you ever get round to looking into the GDM paper more deeply, as you mentioned, I'd be interested in what you have to say, since you might change my opinion on it.