Humans can be recognized inductively: Pick a time such as the present when it is not common to manipulate genomes. Define a human to be everyone genetically human at that time, plus all descendants who resulted from the naturally occurring process, along with some constraints on the life from conception to the present to rule out various kinds of manipulation.
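The inductive structure here can be sketched as a recursive membership test. This is a toy illustration only, assuming hypothetical placeholder predicates (`genetically_human_at_t0`, `natural_descent`) that stand in for the hard empirical questions:

```python
# Toy sketch of the inductive definition of "human".
# The predicates consulted below are hypothetical placeholders,
# not real implementations of the underlying hard questions.

def genetically_human_at_t0(x):
    """Placeholder: was x a genetic human alive at the chosen start time?"""
    return x.get("genetic_human_at_t0", False)

def natural_descent(x):
    """Placeholder: did x arise by the naturally occurring process, with an
    unmanipulated lineage from conception to the present?"""
    return x.get("natural_descent", False)

def is_human(x):
    """Base case: genetically human at the start time.
    Inductive case: a naturally occurring, unmanipulated descendant of a human."""
    if genetically_human_at_t0(x):
        return True
    parents = x.get("parents", [])
    return natural_descent(x) and any(is_human(p) for p in parents)
```

The base case anchors the definition at the chosen time; everything else is admitted only by tracing ancestry back to that anchor, which is where the "constraints on the life from conception to the present" would have to do their work.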
Or maybe just say that the humans are the genetic humans at the start time, and that’s all. Caring for the initial set of humans should lead to caring for their descendants because humans care about their descendants, so if you’re doing FAI you’re done. If you want to recognize humans for some other purpose this may not be sufficient.
Predicting human behavior seems harder than recognizing humans, so it seems to me that you’re presupposing the solution of a hard problem in order to solve an easy problem.
An entirely separate problem is that if you train to discover what humans would do in one situation and then stop training and then use the trained inference scheme in new situations, you’re open to the objection that the new situations might be outside the domain covered by the original training.
Define a human to be everyone genetically human at that time, plus all descendants who resulted from the naturally occurring process, along with some constraints on the life from conception to the present to rule out various kinds of manipulation.
That seems very hard! For instance, wouldn't that qualify molar pregnancies as people, identical twins as one person, and chimeras as two? And it's hard to preclude manipulations that future humans (or AIs) may be capable of.
Or maybe just say that the humans are the genetic humans at the start time, and that’s all.
Easier, but still a challenge. You need to identify a person with the "same" person at a later date, but not, for instance, with lost skin cells or amputated limbs. And what of clones, if we're going by genetics?
It seems to me that identifying people imperfectly (a "crude measure", essentially http://lesswrong.com/lw/ly9/crude_measures/ ) is easier and safer than modelling people imperfectly. But if it has to be done thoroughly, then the model seems better, and less vulnerable to unexpected edge cases.
But the essence of the idea is to exploit something that a superintelligent AI will be doing anyway. We could similarly try and use any “human identification” algorithm the AI would be using anyway.