(1) Well, that’s the funny thing about “should”: if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve the goal. If, on the other hand, the question is, “how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to ‘find itself’ in a possible world with twice as many copies?”—then we need some account of the subjective experience of copyable entities before we can even start to answer the question.
(2) Yes, certainly it’s possible that we’re all living in a digital simulation—in which case, maybe we’re uncopyable from within the simulation, but copyable by someone outside the simulation with “sysadmin access.” But in that case, what can I do, except try to reason based on the best theories we can formulate from within the simulation? It’s no different than with any “ordinary” scientific question.
(3) Yes, I raised the possibility that copyable minds might have no subjective experience or a different kind of subjective experience, but I certainly don’t think we can determine the truth of that possibility by introspection—or for that matter, even by “extrospection”! :-) The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it’s even logically coherent to imagine a distinction between them and copyable minds.
If that’s the most you’re expecting to show at the end of your research program, then I don’t understand why you see it as a “hope” of avoiding the philosophical difficulties you mentioned. (I mean, I have no problem with it as a scientific investigation in general; it’s just that it doesn’t seem to solve the problems that originally motivated you.) For example, according to Nick Bostrom’s Simulation Argument, most human-like minds in our universe are digital simulations run by posthumans. How do you hope to conclude that the simulations “shouldn’t even be included in my reference class” if you don’t hope to conclude that you, personally, are not copyable?