Words as Bayesian Evidence

Suppose you ask Bob and Alice how they're doing. Bob says he's doing good; Alice says she's not doing so well. Let me ask you a question: how confident are you that Bob is actually doing well? Not very confident, right? But why not? After all, Bob did say that he is doing good, and he's not particularly known for being a liar.
I think the move here is to view Bob's words as Bayesian evidence. They are evidence that Bob is doing well. But how strong is that evidence? And how do we think about such a question?
Let's start with how we think about such a question. I think the standard Bayesian approach is pretty practical here. Ask yourself how likely Bob would be to say "good" when he is doing well. Then ask yourself how likely he would be to say it when he isn't.
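To make that comparison concrete, here is a minimal sketch of the likelihood-ratio arithmetic. The specific probabilities are made-up numbers of mine, purely for illustration:

```python
# Toy likelihoods (my own made-up numbers, not claims from the post):
p_say_good_if_well = 0.9   # P(Bob says "good" | he is doing well)
p_say_good_if_not = 0.6    # P(Bob says "good" | he isn't)

# The likelihood ratio measures how strong "good" is as evidence.
likelihood_ratio = p_say_good_if_well / p_say_good_if_not
print(round(likelihood_ratio, 2))  # 1.5 -- only weakly favors "doing well"

# Starting from 50/50 prior odds, update by multiplying by the ratio:
posterior_odds = 1.0 * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))    # 0.6
```

A ratio near 1 means the utterance barely moves your beliefs; the further it gets from 1 in either direction, the stronger the evidence.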
I think most people tend to say "good" if their hedonic state is anywhere between roughly the 10th and 90th percentile. If it's in the 5th to 10th percentile, my model says people will usually say something like what Alice said: "not doing so well". If it's in the 0th to 5th, maybe they'll say "I'm actually really struggling". And similarly for the 90th percentile and up. It depends, though. But with this model, I think we can take Bob's claim as reasonably solid evidence that he is doing fine, and perhaps weak evidence that he is leaning towards actually feeling good. Alice's claim, on the other hand, is under my model pretty strong evidence that she is not doing well.
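The percentile model above can be sketched in a few lines. As simplifying assumptions of mine (not the post's), I take a uniform prior over percentiles and let each band map deterministically to a single stock reply; the band boundaries are the ones from the paragraph, and the top-band phrase is invented:

```python
def reply(percentile):
    """Hypothetical response model: the stock phrase someone uses
    at a given hedonic percentile (0-99)."""
    if percentile < 5:
        return "I'm actually really struggling"
    if percentile < 10:
        return "not doing so well"
    if percentile < 90:
        return "good"
    return "great, actually"  # made-up phrase for the top band

# Uniform prior: every percentile is equally likely a priori.
percentiles = range(100)

def consistent_states(utterance):
    """Posterior support: which percentiles survive hearing the utterance."""
    return [p for p in percentiles if reply(p) == utterance]

bob = consistent_states("good")
print(min(bob), max(bob))      # 10 89 -- Bob could be almost anywhere

alice = consistent_states("not doing so well")
print(min(alice), max(alice))  # 5 9 -- Alice's reply narrows things a lot
```

Under this toy model, "good" leaves 80 of the 100 states open (weak evidence that Bob feels good, solid evidence he's at least fine), while Alice's phrase leaves only 5 (strong evidence she is not doing well), matching the verbal argument.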
Maybe all of this seems obvious to you. If so, good. But why write something so obvious? I don't know. I've just been finding myself tempted to interpret words literally instead of asking how strong they are as Bayesian evidence, and I think other rationalists (and people in general) do this quite often as well.
PS: This is hinted at quite often in HPMoR, and perhaps in other rationalist fic as well. E.g. an exchange like:
Quirrell: [Asks Harry a question]
Harry: [Pauses momentarily]
Quirrell: I see.
Harry: Damn! I basically just told him X by pausing because pausing is strong Bayesian evidence of X.
PPS: This is really just a brain dump. I’d love to see someone write this up better than I did here.
I notice I'm confused. I don't actually know what it would mean (what predictions I'd make, or how I'd find out whether I was correct) for Bob to be "doing good". I don't think it generally means "instantaneous hedonic state relative to some untracked distribution"; I think it generally means "there's nothing I want to draw your attention to". And I take it as completely obvious that the vast majority of social interactions are more contextual and indirect than overt, legible information-sharing.
This combines to make me believe that it’s just an epistemic mistake to take words literally most of the time, at least without a fair bit of prior agreement and contextual sharing about what those words mean in that instance.
I agree that thinking of it as a Bayesian update is often a useful framing. However, the words are a small part of the evidence available to you, and since you're human, you'll almost always have to use heuristics and shortcuts rather than actually knowing your priors, the information, or the posterior beliefs.
I think it generally means “there’s nothing I want to draw your attention to”.
Agreed.
This combines to make me believe that it’s just an epistemic mistake to take words literally most of the time, at least without a fair bit of prior agreement and contextual sharing about what those words mean in that instance.
Agreed.
And I take it as completely obvious that the vast majority of social interactions are more contextual and indirect than overt, legible information-sharing.
I think the big thing I disagree on is whether this is always obvious. Thought about in the abstract like this, I guess I agree that it is. However, I think there are times in the moment when it can be hard not to interpret words literally, and that is what inspired me to write this. Although now I'm realizing that I failed to make that clear or to provide any examples of it. I'd like to provide some good examples now, but it is weirdly difficult to do so.
However, the words are a small part of the evidence available to you, and since you're human, you'll almost always have to use heuristics and shortcuts rather than actually knowing your priors, the information, or the posterior beliefs.
Agreed. I didn’t mean to imply otherwise, even though I might have.