I agree that in game-theoretic terms you’re right: there’s no need to include non-humans as primary sources of values for the CEV (barring some scenarios where we have powerful AIs that aren’t part of the eventual singleton/swarm implementing the CEV).
But I think the moral question is still worthwhile?
It’s definitely a very worthwhile question, and probably a quite difficult one, which is why I would like to bring a superintelligence running CEV to bear on it.
Less flippantly: I agree the question of how to treat animals and their values and preferences is important, but it seems to me like the kind of question you can punt on until you are much smarter and in a much better position to answer it. The universe is long, and I don’t see a need to rush this question.
No, I’m saying it might be too late at that point. The moral question is “who gets to have their CEV implemented?” OP is saying it shouldn’t be only humans; it should be “all beings everywhere”. If we build an AI that implements Humanity’s CEV, then the only way other sapient beings would get primary consideration for their values (rather than secondary consideration, where they count only because Humanity has decided to care about their values) is if Humanity’s CEV allows other beings to be elevated to primary value sources alongside Humanity. I think that’s possible, but it’s not guaranteed, and EAs concerned with e.g. factory farming are well within their rights to worry that those animals are not going to be saved any time soon under an AI implementing Humanity’s CEV.
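To make the primary/secondary distinction concrete, here is a toy sketch of the two aggregation schemes. Everything in it (the function names, the utility numbers, the “concern coefficients”) is hypothetical and purely illustrative; it is not a claim about how an actual CEV would be computed.

```python
# Toy illustration only (hypothetical names and weights, not a real CEV spec).
# "Secondary consideration": animal welfare enters the aggregate only through
# a concern coefficient inside each human's (extrapolated) values.
# "Primary consideration": animals contribute utility terms of their own.

def aggregate_secondary(human_utilities, human_concern_for_animals, animal_welfare):
    # Animals matter only insofar as humans' extrapolated values weight them.
    return sum(u + c * animal_welfare
               for u, c in zip(human_utilities, human_concern_for_animals))

def aggregate_primary(human_utilities, animal_utilities):
    # Animals are value sources in their own right, alongside humans.
    return sum(human_utilities) + sum(animal_utilities)

# If the extrapolated concern coefficients come out near zero, the secondary
# aggregate ignores animal welfare entirely; the primary one cannot.
print(aggregate_secondary([1.0, 2.0], [0.0, 0.0], animal_welfare=5.0))  # 3.0
print(aggregate_primary([1.0, 2.0], [5.0]))                             # 8.0
```

The worry above is exactly that the concern coefficients are not guaranteed to be large: under the secondary scheme, whether animals get helped depends entirely on where Humanity’s extrapolation lands.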
Now, arguably they don’t have a right, as a minority viewpoint, to control the value sources for the one CEV the world gets, but from their perspective they obviously want to prevent a moral catastrophe by including animals as primary sources of CEV values from the start.
Edit: confusion clarified in comment chain here.
I… don’t understand? I only care about my own values being included in the CEV. You only care about your own values (and, you know, other sources of value correlated with your own) being included in the CEV. Why would I care whether we include animals? They are not me. I very likely care about them and will want to help them, but I see absolutely no reason to make that decision right now in a completely irreversible way.
I do not want anyone else to get primary consideration for their values. Ideally it would all be my own! That’s literally what it means to care about something.
I don’t know what you are talking about with “they”. You, just as much as I do, want to have your own values included in the CEV.
I seem to have had essentially this exact conversation in a different comment thread on this post with the OP.