My disagreement with this post is that I am a human-centric carbon[1] chauvinist. You write:
I’m saying something more like: we humans have selfish desires (like for vanilla ice cream), and we also have broad inclusive desires (like for everyone to have ice cream that they enjoy, and for alien minds to feel alien satisfaction at the fulfilment of their alien desires too). And it’s important to get the AI on board with those values.
Why would my “selfish” desires be any less[2] important than my “broad inclusive” desires? Even assuming it makes sense to separate the two, which is unclear. I don’t see any decision-theoretic justification for this. (This is not to say that the AI should like ice cream, but that it should provide me with ice cream, or some replacement that I would consider better.)
I think if the AI kills everyone and replaces us with vaguely-human-like minds that we would consider “sentient”, that will go on to colonize the universe and have lots of something-recognizable-as “fun and love and beauty and wonder”, it would certainly be better than bleak-desolation-squiggles, but it would also certainly be a lot worse than preserving everyone alive today and giving us our own utopic lives.
I probably don’t care much about actual carbon. If I were replaced with a perfect computer simulation of me, it would probably be fine. But I’m not sure about even that much.
“Less” relative to the weights they already have in my utility function.