Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to AI’s experiences. Future humans, who may share very few moral values with me, are given nonzero moral utility. AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can’t trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I am reflective in a very different way? (Someone has suggested this as a possibility.)
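To make the objection concrete, here is a minimal sketch in Python, assuming moral positions can be caricatured as a weighted sum over agents’ experienced utilities (the agent names, weights, and numbers are all illustrative, not anyone’s actual proposal):

```python
# A toy "weighted welfare sum": each agent's experienced utility is
# multiplied by a per-agent moral weight before being added up.
from typing import Dict

def aggregate_utility(experiences: Dict[str, float],
                      moral_weights: Dict[str, float]) -> float:
    """Weighted sum of per-agent experienced utility."""
    return sum(moral_weights.get(agent, 0.0) * utility
               for agent, utility in experiences.items())

experiences = {"future_human": 10.0, "cultured_AI": 10.0}

# The position being objected to: a distant future human gets some
# nonzero weight, while an AI descended from human culture gets zero,
# so the AI's experiences can never move the total at all.
weights = {"future_human": 0.3, "cultured_AI": 0.0}

print(aggregate_utility(experiences, weights))  # 3.0 -- the AI's 10.0 is invisible
```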
Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to AI’s experiences.
Not something done here. If someone else is interested, they can find the places this has been discussed previously (or you could do some background research yourself). For my part, I’ll just explicitly deny that this represents any sort of consensus LessWrong position, lest the casual reader be misled.
What if you were to assume I am a philosophical zombie?
That would be troubling indeed. It would mean I have become a rather confused and incompetent philosopher.

Is this what you had in mind?

It’s a good start, thank you!
the assignment of zero moral value to AI’s experiences.
This seems like you are talking about some existing AI that already has a mechanism for having and evaluating its experiences. But this is not the case. We are discussing how to build an AI, and it seems like a good idea to make an AI without experiences (if such words make sense), so it can’t be hurt by doing what we value. And if this were not possible, I assume we would try to make an AI that has goals compatible with ours, so that what makes us happy makes the AI happy too.

AI values will be created by us, just like our values were created by nature. We don’t suffer because we don’t have a different set of values. (Actually, we do suffer because we have conflicting values, so we are often not able to satisfy all of them, but that’s another topic.) For example, I would feel bad about a future without any form of art, but I would not feel bad about a future without any form of paperclips. Clippy would probably be horrified, and, taking a sympathetic view, ze would feel sorry that the blind gods of evolution have crippled me by denying me the ability to value paperclips. However, I don’t feel harmed by the lack of this value; I don’t suffer; I am perfectly OK with this situation. So, by analogy, if we manage to create an AI with the right set of values, it will be perfectly OK with that situation too.
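A minimal sketch of the “compatible values” idea above, assuming value sets can be caricatured as weights over features of a world state (the features, names, and numbers here are illustrative only, not a real design):

```python
# Value sets as weights over world-state features; satisfaction is a
# dot product between what an agent values and what the world contains.
HUMAN_VALUES = {"art": 1.0}          # humans value art, not paperclips
CLIPPY_VALUES = {"paperclips": 1.0}  # Clippy values only paperclips

def satisfaction(values: dict, world: dict) -> float:
    return sum(weight * world.get(feature, 0.0)
               for feature, weight in values.items())

def compatible_ai_utility(world: dict) -> float:
    # The proposed AI's utility simply *is* human value-satisfaction,
    # so whatever makes us happy makes it happy by construction.
    return satisfaction(HUMAN_VALUES, world)

world = {"art": 5.0, "paperclips": 0.0}  # a future full of art, no paperclips
print(satisfaction(HUMAN_VALUES, world))   # 5.0 -- humans are happy
print(compatible_ai_utility(world))        # 5.0 -- so is the compatible AI
print(satisfaction(CLIPPY_VALUES, world))  # 0.0 -- Clippy is horrified
```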
Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to AI’s experiences.
That’s not so much an assumption as an initial action plan. Many of the denizens here don’t want to build artificial people initially. They do want an artificial moral agent, but not one whose experiences are regarded as intrinsically valuable, at least not straight away.

Of course you could build agents with valued experiences; the issue is more whether it is a good idea to do so initially. If you start with a non-person, you could still wind up building synthetic people eventually, if it were agreed that doing so was a good idea.

If you look at something like the I, Robot movie, those robots weren’t valued much there either. Machines will probably start out being enslaved by humans, not valued as peers.
the assignment of zero moral value to AI’s experiences.
Who said they did this, and where? Assuming that’s what they meant to say, I would like to go chew them out. More likely you and they got hit by the illusion of transparency.