Man, I’m reacting to an entire genre of thought, not just this post exactly, so apologies for the combination of unkindness and inaccuracy, but I think it’s barking up the wrong tree to worry about whether AIs will have the Stuff or not. Pain perception, consciousness, moral patiency: these are all-or-nothing-ish for humans, in our everyday experience of the everyday world. But there is no Stuff underlying them, such that things either have the Stuff or don’t have the Stuff, and no Platonic-realm enforcement of this all-or-nothing-ish-ness. They’re just patterns that are bimodal in our typical experience.
And then we generate a new kind of thing that falls into neither hump of the distribution, and it’s super tempting to ask questions like “But is it really in the first hump, or really in the second hump?” or “What if we treat AIs as if they’re in the first hump, but actually they’re in the second hump?”
Caption: Which hump is X really in?
The solution seems simple to state but very complicated to do: just make moral decisions about AIs without relying on all-or-nothing properties that may not apply.