(Assuming a frame of materialism, physicalism, empiricism throughout even if not explicitly stated)
Some of your scenarios that you’re describing as objectionable would reasonably be described as emulation in an environment that you would probably find disagreeable even within the framework of this post. Being emulated by a contraption of pipes and valves that’s worse in every way than my current wetware is, yeah, disagreeable even if it’s kinda me. Making my hardware less reliable is bad. Making me think slower is bad. Making it easier for others to tamper with my sensors is bad. All of these things are bad even if the computation faithfully represents me otherwise.
I’m mostly in the same camp as Rob here, but there’s plenty left to worry about in these scenarios even if you don’t think brain-quantum-special-sauce (or even weirder new physics) is going to make people-copying fundamentally impossible. Being an upload of you that now needs to worry about being paused at any time or having false sensory input supplied is objectively a worse position to be in.
The evidence does seem to lean in the direction that non-classical effects in the brain are unlikely: neurons are just too big for quantum effects between neurons, and even if there were quantum effects within neurons, it’s hard to imagine them staying coherent for even as long as a single train of thought. The copy losing their train of thought and having momentary confusion doesn’t seem to reach the bar where they don’t count as the same person? And yet weirder new physics mostly requires experiments we haven’t thought to do yet, or experiments in regimes we’ve not yet been able to test. Whereas the behavior of things at STP in water is about as central to things-Science-has-pinned-down as you’re going to get.
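To put rough numbers on that coherence point: published order-of-magnitude estimates (e.g., Tegmark’s 2000 decoherence calculations) put superposition lifetimes in warm, wet neural tissue many orders of magnitude below even a single spike, let alone a train of thought. The figures below are those ballpark estimates, not anything I’ve measured:

```python
# Order-of-magnitude sketch: decoherence vs. neural timescales.
# Numbers are rough published estimates, assumed for illustration only.

decoherence_s = 1e-13       # generous upper-end estimate for neural superpositions
spike_s = 1e-3              # one action potential, ~1 ms
train_of_thought_s = 1.0    # a short train of thought, order of seconds

# Even a single spike outlasts decoherence by ~10 orders of magnitude,
# and a train of thought by ~13.
print(f"spike / decoherence: ~{spike_s / decoherence_s:.0e}")
print(f"thought / decoherence: ~{train_of_thought_s / decoherence_s:.0e}")
```

If those estimates are anywhere near right, a quantum state would have to survive ten trillion times longer than calculated to matter for a single thought.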
You seem to hold that the universe maybe still has a lot of important surprises in store, even within the central subject matter of century-old fields? Do you have any kind of intuition pump for the feeling that there are still that many earth-shattering surprises left (while simultaneously holding that empiricism and science mostly work)? My sense of where there are likely to be surprises left is not nearly so expansive, and this sounds like a crux for a lot of people. Even as much of a shock as QM was to physics, it didn’t invalidate much if any theory except in directly adjacent fields like chemistry and optics. And working out the finer points had progressively narrower and shorter-reaching impact. I can’t think of examples of surprises with a larger blast radius within the history of vaguely modern science. Findings of odd, as-yet-unexplained effects pretty consistently precede attempts at theory. Empirically determined rules don’t start working any worse when we realize the explanation given with them was wrong.
Keep in mind that society holds that you’re still you even after a non-trivial amount of head trauma. So whatever imperfection in copying your unknown unknowns cause, it’d have to be both something we’ve never noticed before in a highly studied area, and something more disruptive than getting clocked in the jaw, which seems a tall order.
Keep in mind also that the description(s) of computation that computer science has worked out is extremely broad and far from limited to electronic circuits. Electronics are pervasive because we have as a society sunk the world GDP (possibly several times over) into figuring out how to make them cheaply at scale. Capital investment is the only thing special about computers realized in silicon; computer science makes no such distinction. The notion of computation is so broad that there’s little if any room to conceive of an agent that’s doing something that can’t be described as computation. Likewise the equivalence proofs are quite broad: it can be arbitrarily expensive to translate across architectures, but within each class of computers, computation is computation, and that emulation is possible has proofs.
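The substrate-independence point above can be made concrete with a toy example. This is an illustrative sketch of my own, not any formal equivalence proof: one adder uses native integer arithmetic, the other is built entirely out of a single NAND primitive, which could just as well be realized in pipes and valves. Slower and clunkier, but the computation is the same:

```python
# Toy sketch: one computation, two "substrates".
# The NAND primitive stands in for any physical gate (relays, pipes, valves).

def nand(a, b):
    return 1 - (a & b)

# XOR, AND, OR constructed from NAND alone (standard constructions).
def xor(a, b):
    return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def add_via_nand(x, y, bits=8):
    """8-bit ripple-carry adder assembled purely from NAND gates."""
    carry, total = 0, 0
    for i in range(bits):
        a, b = (x >> i) & 1, (y >> i) & 1
        total |= xor(xor(a, b), carry) << i
        carry = or_(and_(a, b), and_(carry, xor(a, b)))
    return total

def add_native(x, y, bits=8):
    """The same computation on the 'native' substrate."""
    return (x + y) % (1 << bits)

# Same inputs, same outputs: the substrate differs, the computation doesn't.
assert all(add_via_nand(x, y) == add_native(x, y)
           for x in range(256) for y in range(256))
```

The NAND version is hopelessly inefficient next to a hardware add instruction, which is the point: the cost of translation across substrates varies enormously, but what is computed does not.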
All of your examples are doing that thing where you have a privileged observer position separate and apart from anything that could be seeing or thinking within the experiment. You-the-thinker can’t simply step into the thought experiment. You-the-thinker can of course decide where to attach the camera by fiat, but that doesn’t tell us anything about the experiment, just about you and what you find intuitive.
Suppose for the sake of argument your unknown unknowns mean your copy wakes up with a splitting headache and amnesia for the previous ~12 hours, as if waking up from surgery. They otherwise remember everything else you remember and share your personality such that no one could notice a difference (we are positing a copy machine that more or less works). If they’re not you, they have no idea who else they could be, considering they only remember being you.
The above doesn’t change much for me, and I don’t think I’d concede much more without saying you’re positing a machine that just doesn’t work very well. It’s easy for me to imagine it never being practical to copy or upload a mind, or having modest imperfections or minor differences in experience, especially at any kind of scale. Or simply being something society at large is never comfortable pursuing. It’s a lot harder to imagine it being impossible even in principle with what we already know, or can already rule out with fairly high likelihood. I don’t think most of the philosophy changes all that much if you consider merely very good copying (your friends and family can’t tell the difference; knows everything you know) vs perfect copying.
The most bullish folks on LLMs seem to think we’re going to be able to make copies good enough to be useful to businesses just off all your communications. I’m not nearly so impressed with the capabilities I’ve seen to date, and it’s probably just hype. But we are already getting into an uncanny valley with the (very) low-fidelity copies current AI tech can spit out—which is to say they’re already treading on the outer edge of people’s sense of self.