What literature is available on who will be given moral consideration in a superintelligence’s coherent extrapolated volition (CEV) and how much weight each agent will be given?
Nick Bostrom’s Superintelligence notes that it is an open problem whether AIs, non-human animals, currently deceased people, etc. should be given moral consideration, and whether the values of those who aid in creating the superintelligence should be given more weight than those of others. However, Bostrom does not actually answer these questions, other than mildly advocating that everyone be given equal weight in the CEV. The abstracts of other papers on CEV don’t mention this topic, so I doubt that reading them in full would be useful.
I don’t think anyone has a satisfactory solution to what is inherently a political question, and I think people correctly anticipate that analyzing it through the lens of politics will lead to unsatisfying discussions.
Thinking of the prisoner’s-dilemma-with-access-to-source-code setting, an obvious strategy would be to allocate negentropy to agents that would employ the same strategy, in proportion to the probability that each would have ended up in the position to allocate the universe’s negentropy.
Presumably “employ the same strategy” should be interpreted loosely, as it seems problematic to give no consideration to agents who would use a slightly different allocation strategy.
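The allocation rule described above can be sketched as a toy computation. All names and numbers here are hypothetical illustrations, and "uses the same strategy" is reduced to a simple flag, which glosses over the loose-interpretation problem noted above:

```python
# Toy sketch of the proposed rule: distribute a fixed negentropy budget
# among agents that would employ the same allocation strategy, in
# proportion to each agent's probability of having ended up as the
# allocator. All agents and probabilities below are made up.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    uses_same_strategy: bool   # crude stand-in for "would employ the same strategy"
    p_would_allocate: float    # probability of ending up in the allocator's position

def allocate(agents, total_negentropy=1.0):
    """Split total_negentropy among same-strategy agents, proportional
    to their probability of having been the allocator."""
    eligible = [a for a in agents if a.uses_same_strategy]
    total_weight = sum(a.p_would_allocate for a in eligible)
    if total_weight == 0:
        return {}
    return {a.name: total_negentropy * a.p_would_allocate / total_weight
            for a in eligible}

agents = [
    Agent("A", True, 0.5),
    Agent("B", True, 0.25),
    Agent("C", False, 0.25),  # different strategy: excluded entirely
]
print(allocate(agents))
```

Note that agent C gets nothing despite a nonzero probability of having been the allocator, which is exactly the discontinuity the "interpreted loosely" caveat is trying to avoid.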
Thanks for the idea. I will look into it.