I assume what Will_Pearson meant to say was “would not regret making this wish”, which fits with the specification of “I is the entity standing here right now”. Basically: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built it. (And the AI is supposed to find, out of that set of possible worlds, the one you would most like, or something along those lines.)
I’m not sure that would rule out every bad outcome, but… I think it probably would. Besides the obvious problem that other humans have different preferences from the guy building the AI (maybe the AI is ordered to do a similar thing for each human individually), can anyone think of ways this would go badly?