Status quo is one difference, but I don’t see any other prior principles that point to the future biological brain being a (morally) better way of running a human mind forward than using other kinds of implementations of the mind’s algorithm. If we apply a variant of the reversal test to this, a civilization of functionally human uploads should have a reason to become biological, but I don’t think there is a currently known clear reason to prefer that change.
A tree doesn’t simulate a meaningful algorithm, so the analogy would be that chopping it down is approximately just as good.
When talking about running algorithms, I’m not making claims about identity or preserving-the-original in some other sense, as I don’t see how these things are morally important, necessarily (I can’t rule out that they might be, on reflection, but currently I don’t see it). What I’m saying is that a biological brain doesn’t have an advantage at the task of running the algorithms of a human mind well, for any sensible notion of running them well. We currently entrust this task to the biological brain, because there is no other choice, and because it’s always been like this. But I don’t see a moral argument there.
That’s like saying a future version of a tree is doing an impression of a continuation of the previous tree.
I don’t understand how the difference isn’t clear here.
The objection is about what, if anything, counts as identity as a matter of fact.
If I take a tree, and I create a computer simulation of that tree, the simulation will not be a way of running the original tree forward at all.