We could call the non-nosy hypotheses “nice neighbors”.
Seems like a bad name: “nice neighbors” don’t care if everyone ‘around’ them is being tortured.
I’ve framed things in this post in terms of value uncertainty, but I believe everything can be re-framed in terms of uncertainty about what the correct prior is (which connects better with the motivation in my previous post on the subject).
Wait, do you think value uncertainty is equivalent/reducible to uncertainty about the correct prior? Would that mean the correct prior to use depends on your values?
One issue with Geometric UDT is that it doesn’t do very well in the presence of utility hypotheses which are exactly or approximately the negatives of others: even if there is a Pareto improvement, the presence of such enemies prevents the product of gains-from-trade from rising above zero, so Geometric UDT is indifferent between such improvements and the BATNA. This can probably be improved upon.
So one conflicting pair spoils the whole thing, i.e. ignoring the pair is a Pareto improvement?
Wait, do you think value uncertainty is equivalent/reducible to uncertainty about the correct prior?
Yep. Value uncertainty is reduced to uncertainty about the correct prior via the device of putting the correct values into the world as propositions.
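A minimal formal sketch of that device, with illustrative notation not taken from the post: write $V_i$ for the proposition “$U_i$ is the correct utility function”, extend each world $w$ to a pair $(w, V_i)$, and fold the value uncertainty into the prior:

$$\mathbb{E}[U] \;=\; \sum_i P(V_i) \sum_w P(w \mid V_i)\, U_i(w) \;=\; \sum_{w,\,i} P(w, V_i)\, U_i(w),$$

so an agent uncertain between the $U_i$ acts like an agent with the single utility function $U(w, V_i) = U_i(w)$ whose remaining uncertainty is all in the extended prior $P(w, V_i)$.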
Would that mean the correct prior to use depends on your values?
If we construe “values” as preferences, this is already clear in standard decision theory; preferences depend on both probabilities and utilities. UDT further blurs the line, because in the context of UDT, probabilities feel more like a “caring measure” expressing how much the agent cares about how things go in particular branches of possibility.
So one conflicting pair spoils the whole thing, i.e. ignoring the pair is a Pareto improvement?
Yes, unless I’ve made an error. If the Pareto improvement doesn’t affect the pair, then the gains-from-trade for both members of the pair are zero, making the product of gains-from-trade zero. And the Pareto improvement can’t affect the pair, since an improvement for one would be a detriment to the other.
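A toy numerical check of that reasoning (a minimal sketch: the three hypotheses, the numbers, and the bare product-of-gains objective are illustrative assumptions, not the post’s exact formulation):

```python
import math

# Three utility hypotheses: u2 is the exact negative of u1, u3 is independent.
# Values are the utility each hypothesis assigns to each outcome.
utilities = {
    "batna":       {"u1": 0.0, "u2": 0.0, "u3": 0.0},
    "pareto_plus": {"u1": 0.0, "u2": 0.0, "u3": 1.0},  # better for u3, neutral for u1/u2
}

def gains_product(outcome, batna="batna"):
    """Product of gains-from-trade relative to the BATNA."""
    gains = [utilities[outcome][h] - utilities[batna][h] for h in ("u1", "u2", "u3")]
    return math.prod(gains)

print(gains_product("batna"))        # 0.0
print(gains_product("pareto_plus"))  # 0.0 -- the Pareto improvement scores no better
```

Any option that improves u1 gives u2 the opposite (negative) gain, pushing the product below zero, so nothing can score above the BATNA’s zero and the product-maximizing rule stays indifferent.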
Oh cool!