This is perhaps related to my favoring the unorthodox 1⁄2 answer to the Sleeping Beauty problem, but is anyone else pretty sure that simulating a suffering person doesn’t change the amount of suffering in the world? This is not an argument that “simulations don’t have feelings”—I just think that the number of copies of you doesn’t have moral significance (so long as that number is at least 1). I’m pretty happy right now—I don’t think the world would be improved significantly if there were a server somewhere running a few hundred exact copies of my brain state and sensory input. I consider my identity to include all exactly similar simulations of me, and the quantity of those simulations in no way impacts my utility function (until you put us in a decision problem where the number of copies of me actually matters). I am not concerned about token persons; I’m concerned about the types. What people care about is that there be some future instantiation of themselves and that that instantiation be happy.
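To spell out the analogy (just a rough sketch of the standard setup, which isn’t described in this thread: a fair coin is flipped, Heads means Beauty is awakened once, Tails means twice, and she can’t tell the awakenings apart):

P(Heads | I am awake) = 1⁄2 (halfer: waking tells you nothing new, so the prior over outcome types stands)
P(Heads | I am awake) = 1⁄3 (thirder: weight each possible awakening-token equally, and only one of the three is a Heads-awakening)

The thirder counts instantiated awakening-tokens; the halfer counts outcome types. My intuition about copies is, I think, the moral analogue of the halfer’s count.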
Historical suffering already happened, and making copies of it doesn’t make it worse (why would the time at which a program is run possibly matter morally?). Moreover, it’s not clear why the fact that historical people no longer exist should make a bit of difference in our wanting to help them. In a timeless sense they will always be suffering—what we can do is instantiate an experienced end to that suffering (a peaceful afterlife).
If you combine this with a Big World (e.g. eternal inflation) where all minds get instantiated, then nothing matters. But you would still care about what happens even if you believed this were a Big World.
Why shouldn’t we be open to the possibility that a Big World renders all attempts at consequentially altruistic behavior meaningless?
Even if I’m wrong that single instantiation is all that matters, it seems plausible that what we should be concerned with is not the frequency with which happy minds are instantiated but the proportion of “futures” in which suffering has been relieved.
Nick Bostrom disagrees.
Hmm. I don’t really disagree that qualia are duplicated; it’s more that I’m not sure I care about qualia instantiations rather than types of qualia (confusing this, of course, is uncertainty about what is meant by qualia). His ethical arguments I find pretty unpersuasive, but the epistemological argument requires more unpacking.