There are, of course, many variants possible. The one I focus on is largely solipsistic, where all the people are generated by an AI. Keep in mind that the AI needs to fully emulate only a handful of personas, which are largely recycled in the transition to a new world. (Option 2, then.)
I can understand your moral reservations; however, we should keep the distinction between a real instantiation and an AI's persona. Imagine the reality-generating AI as a skilful actor and writer. It generates a great number of personas with different stories, personalities, and apparent internal subjectivity. When you read a good book, you usually cannot tell whether the events and people in it are real or made up; the same goes for a skilful improv actor: you cannot tell whether you are dealing with a real person or just a persona. In that sense they all pass the Turing test. Yet you wouldn't say that a writer kills a real person when he stops writing about some fictional character, or that an actor kills a real person when she stops acting.
Of course, you may argue that this makes the Waker's life meaningless if she is surrounded by pretenders. But that seems silly: her relationships with other people are the same as yours.
My reservations aren’t only moral; they are also psychological: that is, I think it likely (whether or not I am “right” to have the moral reservations I do, whether or not that’s even a meaningful question) that if there were a lot of Wakers, some of them would come to think that they were responsible for billions of deaths, or at least to worry that they might be. And I think that would be a horrific outcome.
When I read a good book, I am not interacting with its characters as I interact with other people in the world. I know how to program a computer to describe a person who doesn’t actually exist in a way indistinguishable from a description of a real ordinary human being. (I.e., take a naturalistic description such as a novelist might write, and just type it into the computer and tell it to write it out again on demand.) The smartest AI researchers on earth are a long way from knowing how to program a computer to behave (in actual interactions) just like an ordinary human being. This is an important difference.
It is at least arguable that emulating someone with enough fidelity to stand up to the kind of inspection our hypothetical "Waker" would be able to give (let's say) at least dozens of people requires a degree of simulation that would necessarily make those emulated someones persons. Again, it doesn't really matter that much whether I'm right, or even whether it's actually a meaningful question; if a Waker comes to think so, then they're going to see themselves as a mass-murderer.
[EDITED to add: And if our hypothetical Waker doesn’t come to think that, then they’re likely to feel that their entire life involves no real human interaction, which is also very very bad.]