So, the problem with that line of reasoning is that it would work if hypocrisy sometimes co-occurred with a bad thing, but sometimes with a good or neutral thing. But it does not seem that way to me—not to any degree that matters, anyway. I do not take seriously the “akrasia” argument.
Let’s consider a scenario or two:
Scenario 1a
A: Everyone ought to do X.
B: Do you do X?
A: Oh, no, I don’t do X, but really I should. Akrasia, you know.
Scenario 1b
A: Everyone ought to do X. I don’t do X myself, but I really ought to. I’m trying, but failing. Akrasia, you know.
Scenarios 1a and 1b are slightly different. In scenario 1a, A could’ve gotten away with advocating X without his hypocrisy being revealed. That is strictly more blameworthy than scenario 1b, where A admits the disconnect between his words and his actions, but insists that it’s a failure of willpower (or whatever it is that “akrasia” in fact maps to).
Notice what is happening: A is introducing (or seeking to introduce) a new norm of behavior. Should this norm be accepted, conforming to the norm will be socially rewarded, and deviation from the norm will be socially punished. Of course, conforming to the norm is costly (which is a large part of why it’s socially rewarded).
Now, suppose the norm is accepted. Should A be socially punished for deviating from it? If not, why not? Well, in practice, what often happens in such a case is that A might be socially punished a little, but not a lot. You see, A believes that this norm should exist, and he advocates the norm, and he even admits that he is flawed in his deviation from it—these are praiseworthy behaviors, aren’t they?
But in this case A has gotten something for nothing. Talk is cheap; what does it cost A to speak as he does? And he gets praise for it! Everyone else, of course, must choose between conforming to the norm (which is costly in resources) and deviation (which is costly in social status). Unless, of course, they also advocate the norm, while admitting their own akrasia…
This is a bad outcome. It most rewards people whose words do not match their actions, and punishes those who honestly try to conform to all endorsed norms, and to advocate only that which they themselves do; it punishes integrity.
Of course, it is likely that in some cases, a person might believe that something is genuinely a good idea, which thing they genuinely would like to do themselves, and are trying, but failing, to do (for any of the reasons we might be tempted to fold into the umbrella of “akrasia”). It would be foolish to deny that this ever happens. Should they, then, be socially punished for advocating that thing?
Yes, of course they should. Not harshly, mind you! Just a little. Enough to serve as a small but noticeable cost; enough to discourage doing this often, doing this all the time, for many things; enough to prevent anyone from gaining a great deal of social approval costlessly; enough to ensure that people advocate—of those things which they themselves fail to do—only those few which they really believe in—enough to take the status hit for the minor hypocrisy. This is the good outcome.
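The incentive asymmetry described above can be sketched as a toy payoff model. All the numbers below are invented purely for illustration; only their relative ordering matters, and the function names are mine, not anything from the discussion.

```python
# Toy payoff sketch of the norm-advocacy scenario. Invented numbers;
# only their ordering matters.

CONFORM_COST = 5     # doing X is costly in resources
PRAISE = 4           # social reward for advocating the norm
DEVIATE_PENALTY = 6  # social cost of silently deviating from an accepted norm

def payoff(advocates, conforms, confesses_akrasia, hypocrisy_cost):
    """Net payoff for one agent once the norm is accepted.

    `hypocrisy_cost` is the extra social penalty (if any) attached to
    advocating a norm one does not follow."""
    total = PRAISE if advocates else 0
    if conforms:
        total -= CONFORM_COST
    elif confesses_akrasia:
        total -= 1  # confessed deviation is punished "a little, but not a lot"
    else:
        total -= DEVIATE_PENALTY
    if advocates and not conforms:
        total -= hypocrisy_cost  # the anti-hypocrisy norm's contribution
    return total

# With no anti-hypocrisy norm (hypocrisy_cost = 0), the cheap-talk
# strategy (advocate, confess akrasia, don't conform) dominates:
cheap_talk = payoff(advocates=True, conforms=False,
                    confesses_akrasia=True, hypocrisy_cost=0)
integrity = payoff(advocates=True, conforms=True,
                   confesses_akrasia=False, hypocrisy_cost=0)
assert cheap_talk > integrity  # talk is cheap: 3 > -1

# A modest hypocrisy cost flips the ordering, so integrity pays better:
cheap_talk_penalized = payoff(advocates=True, conforms=False,
                              confesses_akrasia=True, hypocrisy_cost=5)
assert cheap_talk_penalized < integrity  # -2 < -1
```

Note that the corrective cost need not be large: a small, reliable penalty is enough to flip the ordering, which matches the "not harshly, just a little" prescription.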
That is one sort of hypocrisy scenario. Of course there are many other kinds. Similar logic applies, I think, to almost all, if not indeed all, of them.
The thing to understand is that the exact way in which hostile-agenthood translates into relative benefit and relative harm is often very subtle, and often very difficult to sort out directly. Furthermore, it is often not only difficult, but socially unacceptable, to question, to attempt to discover, to take certain actions aimed at ascertaining whether any such thing is taking place. The anti-hypocrisy norm sidesteps this. It is a generalized defense. It is, computationally (so to speak) as well as socially, tremendously cheaper than the alternative. It is also flexible: it can be varied in strength, in accordance with the degree of judged hypocrisy (it does come in degrees, you know).
(A parallel may be drawn with “appearance of impropriety” norms—which, I have noticed, seem also to be in disfavor in rationalist communities; and more’s the pity. Rejecting such norms drastically lowers resistance to certain classes of exploits—exploits which, indeed, seem to be more common in “our sorts” of communities than elsewhere.)
To what extent do you think there is still a disagreement between us, if I’m in agreement about the rule:
8. “If it’s a conversation about norms, and the hypocrite is making an implicit status claim with their words, noticing the hypocrisy should counter-indicate the norm fairly strongly and also detract from the status of the hypocrite.”
I know we have pending points unrelated to that (IE, the status of #6), but it seems like bringing out the distinction of #8 may change the conversation. Certainly I was ignoring that distinction before. So, does your position on the disagreement about #6 change, with that in mind?
If not, my response to the scenarios which you mention above is that (unless I’m mistaken) they fall under #8, so it seems like I don’t need anything like #6 to get them right.
The problem with your #8 is that it’s too specific. What you seem to be doing here is taking a fairly general analytical framework, extracting two specific conclusions from it, and then replacing the framework with the conclusions. This is, of course, problematic for several reasons:
The conclusions in question won’t always hold. Note that inserting qualifiers like “fairly strongly” (and otherwise making explicit the idea that the conclusions are not an in-all-cases thing) doesn’t fix the problem, because without the framework, you don’t have a way of re-generating the conclusions, nor of determining whether they hold in any given case.
There could be (indeed, are likely to be) other conclusions which one may draw from said analytical framework, beyond the ones you’ve enumerated. (Turning an algorithm or heuristic into a lookup table is always problematic for this reason—how sure are you that you’ve enumerated all the input-output pairings?)
Because the analytic framework is itself only a heuristic (as we have discussed elsethread), it’s dangerous to elevate any particular conclusions it generates to the status of independent rules (or even heuristics); it obscures the heuristic nature of the generating framework. In this case, the specific problem is that #6 is highly amenable to having its output affected by other things that we know about the agent in question (i.e., the alleged hypocrite), in various fairly straightforward ways; whereas with your #8, it’s not really clear how to apply case-specific knowledge to modify the given conclusions (and so, if we do so at all, we’re likely to do it in an ad-hoc and imprecise manner—some sort of crude “social status override”, perhaps).
Of course, your #8 is certainly a good distillation of a particular sort of quite common hypocrisy-related issue. But beware of attempting to replace the generalized anti-hypocrisy norm with it, for the reasons I’ve given.
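The lookup-table worry above can be made concrete. The toy code below is my own stand-in structure, not anything from the discussion: the point is only the difference between a generating heuristic and an enumeration of its extracted outputs.

```python
# "Generating heuristic" vs. "lookup table of extracted conclusions".
# The domain (judging a hypocrisy case) is a toy stand-in.

def judge_case(norm_discussion, status_claim, agent_is_trusted):
    """The 'framework': a (toy) generating heuristic. It produces an
    answer for any combination of inputs, including ones nobody
    thought to enumerate, and case-specific knowledge (here, trust in
    the agent) modifies its output in a principled way."""
    penalty = 0
    if norm_discussion and status_claim:
        penalty += 2  # roughly, the case rule #8 describes
    if not agent_is_trusted:
        penalty += 1  # case-specific knowledge adjusts the result
    return penalty

# The 'lookup table': only the conclusions that happened to be extracted.
EXTRACTED_RULES = {
    (True, True): 2,  # rule #8, frozen as a fixed input-output pair
}

# The heuristic still answers for inputs the table never anticipated...
assert judge_case(norm_discussion=True, status_claim=False,
                  agent_is_trusted=False) == 1
# ...whereas the table simply has no entry, and cannot regenerate one:
assert (True, False) not in EXTRACTED_RULES
```

Once the framework is discarded, the table's missing entries cannot be filled in except ad hoc, which is the "how sure are you that you've enumerated all the input-output pairings?" problem.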
One thing that I’d like to mention here, that may help clarify some of our disagreement, is the following (which, perhaps, would better fit a different subthread of this conversation, but I’m not quite up to the task of finding the perfect place for it, at the moment)…
You’ve mentioned “high-trust spaces” (or similar language) several times now, and my response has been, essentially, that (to a first approximation), such things do not exist. Let me expand on that a bit.
If you define a “high-trust space” as a social context which does not require an anti-hypocrisy norm in order to function well—i.e., have mostly honest, mostly cooperative, mostly effective, positive-sum interactions between its members—then, indeed, I maintain that such things don’t exist (excluding close-knit family/friends groups, as I’ve noted).
However, what I think absolutely does exist is “high-trust spaces” in a different sense: social contexts which function well (in the sense just given) because they have a strong anti-hypocrisy norm (plus other reasons, of course).
Given this, I view the generalized anti-hypocrisy norm as a sort of “locks keep honest people honest” mechanism. A high-trust space remains high-trust by virtue of such mechanisms, which allow it to attract and retain people of high integrity, and repel and expel people of low integrity. Thus, observing that a social context exhibits high trust, and deciding that therefore no anti-hypocrisy norm is needed there, is a drastic misunderstanding of the direction of causation—and is likely to have unfortunate consequences for that social context, going forward.
What you seem to be doing here is taking a fairly general analytical framework, extracting two specific conclusions from it, and then replacing the framework with the conclusions.
I agree with your remarks about this general pattern, but the mitigating factor here is that when a powerful heuristic generates conclusions in specific cases which are clearly very wrong, it is useful to refine the framework. That’s what I’m trying to do here. Your objection is that my refinement throws the baby out with the bathwater. Fine—then where’s the baby? I currently see cause for #8, but you see #8 as neglecting a bunch of other useful stuff which comes from the general anti-hypocrisy norm. Can you point to some other useful things which don’t come from #8 alone?
But, perhaps it is premature to have a “where’s the baby?” conversation, because you are still saying “where’s the bathwater?” IE, you don’t see a need to throw anything out at all.
Because the analytic framework is itself only a heuristic (as we have discussed elsethread), it’s dangerous to elevate any particular conclusions it generates to the status of independent rules (or even heuristics); it obscures the heuristic nature of the generating framework.
Maybe it’s not very cruxy, but this part didn’t make sense to me. If it’s dangerous to elevate #8 to the status of a heuristic because it might be taken as a rigid rule, isn’t it similarly dangerous to elevate general anti-hypocrisy to the level of a heuristic, for fear of it becoming rigid? That’s basically my whole schtick here—that the general norm seems to create a lot of specific behaviors which are silly upon closer inspection. Your argument in the above paragraph seems to rest on some unstated assumption which gives the general norm a radically different status than any more specific variations we are discussing. Maybe it does require a radically different status, but that seems like the subject under debate rather than something to be assumed.
If you define a “high-trust space” as a social context which does not require an anti-hypocrisy norm in order to function well—i.e., have mostly honest, mostly cooperative, mostly effective, positive-sum interactions between its members—then, indeed, I maintain that such things don’t exist (excluding close-knit family/friends groups, as I’ve noted).
However, what I think absolutely does exist is “high-trust spaces” in a different sense: social contexts which function well (in the sense just given) because they have a strong anti-hypocrisy norm (plus other reasons, of course).
Given this, I view the generalized anti-hypocrisy norm as a sort of “locks keep honest people honest” mechanism.
My argument was an either-or, stating why I didn’t see the norm as useful in high-trust or low-trust situations. But I agree that I have to address the case where the norm is useful precisely because its existence prevents the sort of cases where it would be needed.
But, to this I’d currently reply that I don’t see what’s captured by the general norm and not by #8. So the baby/bathwater discussion seems most useful at the moment:
(if we can think of ways forward) re-starting the conversation on #6, which gets at some of the cases where I think anti-hypocrisy yields conclusions that are wrong and harmful. Specifically, I claim that in many cases in my recent experience, people discounted their own advice due to the anti-hypocrisy heuristic; I noticed this; I noticed that I myself had updated against their advice; and none of this really seemed to make any sense in context. Sometimes advice is really straightforward, has obvious and verifiable reasons for being good advice, is easy for the listener to follow, and is hypocritical.
Or, otherwise, can you illustrate positive use-cases which fall outside of #8?
I note that your response is not what I expected—I would have sooner expected you to defend the position that #8 implies the entire norm because all discussions are actually veiled discussions about norms (IE, you can’t really separate advice from status games, good ideas always have “should”-nature, etc).
Also, I want to call out progress which has occurred in this discussion, lest it seem like an interminable argument:
We have done a lot of refinement of what could be meant by anti-hypocrisy norms (or their negation).
I now understand that you don’t mean that we should always call out hypocrisy, which was at one point the main thing I was arguing against.
We agree on how these cases should be handled, if not on the underlying heuristics at play.
I think I might just agree with the status version of “flinching away from hypocrisy”. IE, if it’s a conversation about norms, and the hypocrite is making an implicit status claim with their words, noticing the hypocrisy should counter-indicate the norm fairly strongly and also detract from the status of the hypocrite.
(I’ll think about it more, and probably put an addendum at the end of the post calling out this and other distinctions which have been raised.)
I see, thanks.