Sometimes we take an intuition that we know to be incorrect and replace it with another decision-making procedure, such as the principle of expected utility. If the intuitions that feed into that decision-making procedure are thought to be correct, then that is all we need to do.
1) What decision-making procedure do you use to replace intuition with another decision-making procedure?
2) What decision-making procedure is used to come up with numerical utility assignments, and what evidence do you have that those assignments are correct with any particular probability?
Our intuitions may be incapable of producing exact numeric estimates, but they can still provide rough magnitudes.
3) What method is used to convert those rough estimates provided by our intuition into numeric estimates?
3b) What evidence do you have that converting intuitive judgements of the utility of world states into numeric estimates increases the probability of attaining what you really want?
What? You don’t “assign a utility to get the desired result”, you try to figure out what the desired result is.
An example would be FAI research. There is virtually no information to judge its expected utility. If you are in favor of it, you can cite the positive utility associated with a galactic civilization; if you are against it, you can cite the negative utility associated with getting it wrong, or with making UFAI more likely by solving decision theory.
The desired outcome is found by calculating how much it satisfies your utility function, e.g. how many utils you assign to an hour of awesome sex and how much negative utility you assign to an hour of horrible torture.
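The procedure being described amounts to scoring candidate world states by the utilities assigned to them and picking the maximum. A minimal sketch, with entirely made-up utility numbers chosen only to mirror the example above:

```python
# Toy illustration: the utility numbers here are arbitrary assignments,
# not derived from anything -- which is part of the objection that follows.
world_states = {
    "hour of awesome sex": 100,
    "hour of horrible torture": -10_000,
    "ordinary hour": 0,
}

# The "desired outcome" is simply whichever state scores highest.
desired = max(world_states, key=world_states.get)
print(desired)  # hour of awesome sex
```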
Humans do not have stable utility functions and can simply change the weighting of various factors, and thereby which action maximizes expected utility.
What evidence do you have that the whole business of expected utility maximization isn’t just a perfect tool to rationalize biases?
(Note that I am not talking about the technically ideal case of a perfectly rational (whatever that means in this context) computationally unbounded agent.)
Here the quantities involved even let you make an explicit calculation, if you want to: you know what the prizes are, you know what you have to give up to participate, and you can find out how many people typically participate in such events. Though you can probably get close enough to the right result even without an explicit calculation.
Sure, but if attending an event is dangerous because the crime rate in that area is very high due to recent riots, what prevents you from adjusting your utility function so that attending maximizes expected utility anyway? In other words, what difference is there between just doing what you want based on naive introspection and using expected utility calculations? If utility is completely subjective and arbitrary, then it won't help you to evaluate different actions objectively. Winning is then just a label you can assign to whichever world state you like best at any given moment.
What would be irrational about playing the lottery all day, as long as I assign huge amounts of utility to money won by playing the lottery, and therefore to world states where I am rich by means of playing the lottery?
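The arbitrariness being objected to can be made concrete: with invented odds and prize figures (all hypothetical), the very same ticket purchase flips from negative to positive expected utility depending only on how many utils one decides lottery winnings are worth.

```python
# Hypothetical lottery figures, for illustration only.
p_win = 1e-8   # chance of winning
prize = 1e7    # jackpot in dollars
ticket = 2.0   # ticket price, treated as 2 utils forgone

def expected_utility(utility_of_winning):
    return p_win * utility_of_winning - ticket

# Treat a dollar won as one util: buying a ticket has negative EU.
normal = expected_utility(prize)
# Assign a huge bonus specifically to "money won by playing the lottery":
# now the same purchase has positive EU.
inflated = expected_utility(prize * 1e6)

print(normal < 0, inflated > 0)  # True True
```

Nothing in the formalism itself rules out the second assignment; the decision procedure only ranks actions relative to whatever utilities were fed in.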