How does one make a non-arbitrary division between a human-capable AGI and, say, a human upload?
All divisions are arbitrary. But there are some really good differences in things like past history, properties of mental algorithms, value system, and relationships to other humans that make it reasonable to clump human minds together, separately from AIs.
With enough work, these could presumably be overcome, with some sort of “AI child” project. But it would be a hard problem relative to making an AI that takes over the world.
Perhaps all divisions are arbitrary, but let’s unpack those really good differences…
Any plausible AI design is going to have a learning period and thus a history, correct? Does ‘humanness’ require that your history involved growing up in a house with a yard and a dog?
Differences in mental algorithms are largely irrelevant. Even considering only algorithmic differences that actually produce functional changes in behavior, there is a vast range in the internal algorithms that different human brains use or can learn to use, and a wide spectrum of capabilities. It’s also rather silly to define humanness around some performance barrier.
Value systems are important, sure, but crucially we actually want future AGIs to have human values! So if values are important for humanness, as you claim, this only supports the proposition.
Relationships to other humans: during training/learning/childhood the AGI is presumably going to be interacting with human adult caretakers/monitors. The current systems in progress, such as OpenCog, are certainly not trying to build an AGI in complete isolation from humans. Likewise, would a Homo sapiens child who grows up in complete isolation from other humans and learns only through computers and books not be human?
Does ‘humanness’ require that your history involved growing up in a house with a yard and a dog?
So first, you have to understand that human definitions are fuzzy. That is, when you say “require,” you are not going about this the right way. For this I recommend Yvain’s post on disease.
As for the substance, I think “growing up” is pretty human. We tend to all follow a similar sort of development. And again, remember that definitions are fuzzy. Someone whose brain never develops after they’re born isn’t necessarily “not human,” they are just much farther from the human norm.
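To make the fuzziness concrete, here is a minimal sketch (the feature axes and numbers are invented purely for illustration, not anyone’s actual model): it treats “humanness” as distance from a cluster of typical human traits rather than as a yes/no predicate.

```python
import math

# Hypothetical feature axes: (typical upbringing, value overlap, algorithmic similarity).
# All numbers below are invented for illustration only.
HUMAN_NORM = (1.0, 1.0, 1.0)

def distance_from_human_norm(mind: tuple) -> float:
    """Graded membership: smaller distance = more central example of 'human'."""
    return math.dist(mind, HUMAN_NORM)

minds = {
    "typical adult":              (0.9, 0.95, 0.9),
    "child raised in isolation":  (0.1, 0.7, 0.9),
    "'AI child' project AGI":     (0.6, 0.8, 0.2),
}
for name, features in minds.items():
    print(f"{name}: {distance_from_human_norm(features):.2f} from the human norm")
```

On this picture nothing forces a sharp boundary; the isolated child and the “AI child” are both simply farther out from the cluster center, along different axes.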
Differences in mental algorithms are largely irrelevant.
Insert Searle’s Chinese room argument here.
but crucially we actually want future AGIs to have human values!
SIAI doesn’t, at least. When making an AI to self-improve to superintelligence, why make it so that it gets horny?
would a Homo sapiens child who grows up in complete isolation from other humans and learns only through computers and books not be human?
Didn’t you talk about this in your post, with the child raised by wolves? Relationships with humans are vital for certain sorts of brain development, so an isolated child is much farther from the human norm.
but crucially we actually want future AGIs to have human values!
SIAI doesn’t, at least. When making an AI to self-improve to superintelligence, why make it so that it gets horny?
Exactly human values, not values analogous to human values. So if humans value having sex but don’t value an FAI having sex, then the FAI will value humans having sex but not value having sex itself.
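One way to see the structure of that claim is a toy sketch (hypothetical names, a sketch under my reading of the thread, not anyone’s actual proposal): the FAI’s utility function is defined over whether humans satisfy their values, so no term in it rewards the agent for having the experience itself.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    humans_having_sex: bool  # do humans get to act on this value?
    agent_having_sex: bool   # does the agent act on it itself?

def human_utility(state: WorldState) -> float:
    """Toy stand-in for one human terminal value."""
    return 1.0 if state.humans_having_sex else 0.0

def fai_utility(state: WorldState) -> float:
    """The FAI inherits human_utility over *humans'* lives; nothing here
    rewards the agent for having the experience itself."""
    return human_utility(state)

# The FAI prefers a world where humans act on their values...
assert fai_utility(WorldState(True, False)) > fai_utility(WorldState(False, False))
# ...and is indifferent to whether it "has sex" itself.
assert fai_utility(WorldState(True, True)) == fai_utility(WorldState(True, False))
```

The indirection is the whole point: the human value appears in the FAI’s utility function only as something to be satisfied in humans, which is why “exactly human values” does not imply a horny superintelligence.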