So what do you do? This is the last grade of the semester, and there are no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value: happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re no worse off checking the grade tomorrow.
Should you destroy all that expected utility for the sake of the truth? (For reference, the truth is that you got a C-, which is BAD.)
I would think that an ideal rationalist’s mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.
In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.
> I would think that an ideal rationalist’s mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.
Suppose I estimate the probability of a good curve at roughly p = 5/50 = 10%. If there’s a curve, I’ll get an A (utility value 4); otherwise a C- (utility value 1.7). Suppose also that I need a minimum utility of 2 to enjoy the party, and that the party itself is worth 0.2 utility.
My expected utility from not checking the grade is 0.1 × 4 + 0.9 × 1.7 + 0.2 = 2.13. My actual utility once I’d checked the grade is 1.7 + 0.2 = 1.9.
If this expected utility estimate is good, then I should be happy in proportion to it (although I might as well acknowledge now that I failed to account for the difference between expected utility and the utility of the expected outcome, thus assuming that I’m risk-neutral).
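As a sanity check, here is the same toy model in code; a minimal sketch that uses only the numbers assumed above (the 10% curve, the grade utilities of 4 and 1.7, and the 0.2 for the party), with the variable names being purely illustrative:

```python
# Toy model of "check the grade tonight or not", using the numbers above:
# a 10% chance of a good curve (A, utility 4), otherwise a C- (utility 1.7),
# plus 0.2 utility for going to the party either way.

p_curve = 5 / 50           # estimated probability of a good curve
u_a, u_cminus = 4.0, 1.7   # utility of an A vs. a C-
u_party = 0.2              # utility of the party itself

# Expected utility for the evening if the grade stays unchecked:
eu_unchecked = p_curve * u_a + (1 - p_curve) * u_cminus + u_party
print(f"{eu_unchecked:.2f}")   # 2.13

# Utility for the evening after checking and seeing the C- (the actual outcome):
u_checked = u_cminus + u_party
print(f"{u_checked:.2f}")      # 1.90
```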
Rather than there being a discrete point above which you can enjoy the party and below which you cannot, I would expect the amount you enjoy the party to vary with the grade you got, unless the cutoff reflects some additional consequence of scoring below that grade, one that carries its own utility hit. Your prior expected utility would then incorporate that additional hit weighted by the likelihood of its occurring.
Anyway, in any specific case, your utility may go up or down by checking your grade, but if you have a perfectly accurate assessment of the probability distribution for your grade, then on average your expected utility should be the same whether you check or not.
In this case, the fact that we know the actual grade stands to be misleading, since it’s liable to make any probability distribution whose expected grade utility isn’t 1.7 look wrong, even though 1.7 might not be the expectation the available data actually supported.
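A quick numerical check of the claim that, on average, it comes out the same: a sketch assuming the two-outcome distribution from the earlier comment and ignoring any enjoyment cutoff, so utility is just the grade utility plus the flat 0.2 for the party.

```python
# If the probability distribution over the grade is accurate, then averaging
# the post-check utility over the grades you might see gives back exactly the
# expected utility you had before checking anything.

outcomes = {4.0: 0.1, 1.7: 0.9}    # grade utility -> assumed probability
u_party = 0.2                      # flat utility from going to the party

# Expected utility for the evening if the grade stays unchecked:
eu_unchecked = sum(u * p for u, p in outcomes.items()) + u_party

# Average, over the grades you might actually see, of the utility after checking:
avg_checked = sum((u + u_party) * p for u, p in outcomes.items())

print(f"{eu_unchecked:.2f}")  # 2.13
print(f"{avg_checked:.2f}")   # 2.13 -- the same, by linearity of expectation
```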
I considered your point at length. To address your comment, I could put an ignorance prior on my old model, assigning equal probability to every grade utility between 1.7 and 4.0 (discretized if need be). I could make “enjoying the party” a binary output, 1 or 0. I could do lots of other tweaks.
But the problem here is, everything comes down to whether this model (or any other 5-minute model) is good enough to explain my non-rationalist gut feeling, especially without an experiment. And, you know, I’m not about to fail an easy exam in a couple of days just to see what my utility function would do.
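For concreteness, here is one way that uniform-prior, binary-enjoyment tweak could be cashed out; a rough sketch in which the 0.1 grade spacing is my own choice, while the threshold of 2 and the 0.2 party bonus are carried over from the earlier toy model.

```python
# The uniform ("ignorance") prior tweak: grade utilities from 1.7 to 4.0 in
# steps of 0.1, all equally likely, and "enjoying the party" as a binary
# 1-or-0 output. Working in tenths of a utility point keeps comparisons exact.

grades = range(17, 41)      # 17 -> 1.7, 18 -> 1.8, ..., 40 -> 4.0
party, threshold = 2, 20    # 0.2 party bonus, enjoyment cutoff of 2.0

def enjoys(total_tenths):
    """Binary enjoyment: 1 if total utility reaches the cutoff, else 0."""
    return 1 if total_tenths >= threshold else 0

# Not checking: judge the evening by the expected grade under the prior.
expected_grade = sum(grades) / len(grades)            # 28.5, i.e. 2.85
print(enjoys(expected_grade + party))                 # 1

# Checking: average the binary outcome over the grades you might actually see.
expected_enjoyment = sum(enjoys(g + party) for g in grades) / len(grades)
print(round(expected_enjoyment, 2))                   # 0.96
```

The gap between the two printed numbers is just the expected-utility versus utility-of-the-expected-outcome wedge acknowledged above.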
Conservation of expected evidence means that, ideally, you can’t expect the introduction of new evidence to shift your expected utility. In practice that’s probably not the case, but then humans aren’t even rough approximations of ideal rationalists.
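Stated symbolically, with the utility version following from the same law of total expectation (assuming utilities are being averaged linearly):

$$P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E), \qquad \mathbb{E}[U] = \sum_{e} P(e)\,\mathbb{E}[U \mid e].$$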
> In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.
The Dunning-Kruger effect suggests that people on average will be too optimistic about grades.
That depends on their degree of competence. People who are actually competent tend to underestimate themselves. Perhaps I’ve simply developed an unrepresentative impression by associating mostly with people who are generally competent.