The “should” here is not defined clearly enough (or at all!), even though this seems to be the central point in the debate. We have the intuition that the question is meaningful, but I suspect that it really isn’t. I don’t understand what this could possibly mean, except in trivial cases where you have already specified a goal. I would leave it at “Most intelligent beings in the multiverse share similar preferences”, perhaps adding a qualifier like “evolved/intelligently designed”. Note that this would then be answering a slightly different question than 3., 4. and 5.
My own view is roughly a 4.3 on the spectrum from 4. to 5.
The way “complexity of value” is used by Eliezer seems to suggest that he adheres to view 3, although I could well imagine him also going for 4 or 5.
I’m unsure about 6; I suspect/hope that you can just define “winning” clearly enough in terms of whatever utility function you’re interested in, and decision theory will sort itself out. But maybe it’s more complicated.