Ah! Sorry for the mixed-up identities. Likewise, I didn’t come up with that “51% chance to lose $5, 49% chance to win $10000” example.
But, ah, are you retracting your prior claim about a variance of greater than 5? Clearly this system doesn’t work on its own, though it still looks like we don’t know A) how decisions are made using it or B) under what conditions it works. Or in fact C) why this is a good idea.
Certainly for some distributions of utility, if the agent knows the distribution of utility across many agents, it won’t make the wrong decision on that particular example by following this algorithm. I need more than that to be convinced!
For instance, it looks like it’ll make the wrong decision on questions like “I can choose to 1) die here quietly, or 2) go get help, which has a 1⁄3 chance of saving my life but will be a little uncomfortable.” The utility of surviving presumably swamps the rest of the utility function, right?
Ah, it appears that I’m mixing up identities as well. Apologies.
Yes, I retract the “variance greater than 5”. I think it would have to be a variance of at least 10,000 for this method to work properly. I do suspect that this method is similar to decision-making processes real humans use (optimizing the median outcome of their lives), but methods that work well across many routine decisions break down when one or two very important decisions dominate.
If, instead of optimizing for the median outcome, you optimized for the average of the outcomes within 3 standard deviations of the median, I suspect you would arrive at a decision procedure quite close to the one people actually use (one that ignores very small chances of very large risk or reward).
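The three rules discussed above (expected value, median outcome, and the 3-standard-deviation trimmed mean) can be sketched in a few lines. This is not from the thread; it's a minimal sketch that assumes utilities come as discrete (probability, utility) pairs, and the specific utility numbers for the survival example are arbitrary stand-ins:

```python
import statistics

def as_sample(dist):
    """Expand a (probability, utility) distribution into a weighted
    sample at per-mille resolution (crude, but keeps the sketch short)."""
    sample = []
    for p, u in dist:
        sample.extend([u] * round(p * 1000))
    return sample

def expected_value(dist):
    return sum(p * u for p, u in dist)

def median_outcome(dist):
    return statistics.median(as_sample(dist))

def trimmed_mean(dist):
    """Average of the outcomes within 3 standard deviations of the
    median -- the modification proposed above."""
    sample = as_sample(dist)
    med = statistics.median(sample)
    sd = statistics.pstdev(sample)
    kept = [u for u in sample if abs(u - med) <= 3 * sd]
    return sum(kept) / len(kept)

# The original gamble: expected value strongly favors taking it, but
# the median outcome is a $5 loss, so a median-optimizer declines.
gamble = [(0.51, -5), (0.49, 10000)]
print(expected_value(gamble))   # roughly 4897: take the gamble
print(median_outcome(gamble))   # -5.0: decline it
print(trimmed_mean(gamble))     # equals the full mean here, since a
                                # 49% chance is not a "very small chance"

# The survival example, with made-up utilities: certain death at -1000
# versus a 1/3 chance of surviving with mild discomfort (-1) and a 2/3
# chance of a slightly worse death (-1001).
die_quietly = [(1.0, -1000)]
go_for_help = [(1 / 3, -1), (2 / 3, -1001)]
print(expected_value(go_for_help))  # about -667.7: go for help
print(median_outcome(go_for_help))  # -1001.0: die quietly (the wrong call)
```

The survival case makes the objection concrete: because death is the median outcome of going for help, the median-optimizer ranks that option below certain death, no matter how large the survival payoff is.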
This all seems very sensible and plausible!