Well, the post is claiming something along the lines of “our rules shouldn’t try to do too much, and if they don’t then it isn’t impractical to make them all explicit and leave little to individuals’ judgement”, but …
… if it’s claiming that this is universally true, which seems like a strong and counterintuitive claim, then I don’t think it’s made much of an attempt to make its case …
… but if it’s claiming only that it’s true in some particular set of contexts then your choice of examples is highly relevant, because the examples give some indication of what that particular set of contexts is, and because the plausibility of your argument leans on the plausibility of the examples.
So I think I could buy “this post is a very general and philosophical one so what examples I use doesn’t matter much”, or I could buy “this post gives some reason to accept the claims it makes”, but I don’t think I buy them together.
I’d been assuming we were in the second case, but from what you now say it seems like maybe we’re in the first, so let me explain why I find the arguments unconvincing if they are meant to be so general.
You don’t so much argue for the claim you’re making as counter arguments against it. And, in fact, pretty much the only argument against that you consider goes like this: “The space of all possible behaviors is unthinkably vast. What if the formidable intelligence of an adversary who hates everything our Society stands for, comes up with a behavior that’s really bad but isn’t forbidden by any of Society’s rules?”
This is, to my mind, a weird argument. I have never once heard anyone argue for rules that leave some things to human judgement by appealing to the unthinkable vastness of the space of possibilities as opposed to its thinkable and quite comprehensible, but still pretty large, vastness, or by postulating someone “who hates everything our Society stands for” as opposed to someone who merely has some interests or purposes that don’t fit well with ours.
(So I assumed that you had in mind some particular set of contexts in which, for whatever reason, those who want more flexibility in the rules than you do were concerned about such things. If you’re arguing in full generality, then I say: no, most of the time when people think rules need to allow for individual judgement they aren’t appealing to anything so dramatic.)
You say that those who want more flexibility in the rules than you do think that “we need to empower leaders with the Authority to make judgement calls—even to control the minute details of anyone’s behavior, if that’s what it takes to safeguard Society’s Values”.
Again, this is a weird thing for them to think, and doesn’t seem like a good fit for the position of more than a tiny fraction of people I have encountered.
(So I assumed that you had in mind some particular set of contexts in which either the advocates of More Space For Human Judgement are unusually authoritarian, or the domain within which they might seek to “control the minute details of everyone’s behaviour” is so constrained that seeking to do that doesn’t require one to be unusually authoritarian. If you’re arguing in full generality, then I say: no, this is not what the people you profess to be talking about generally want.)
You say that the rules aren’t there to express society’s values, but only to enable people not to kill one another while they express their values. (I take it that “society” and “kill”, at least, should be interpreted somewhat flexibly.)
Whether that’s what some particular set of rules is for or not will vary. It seems to me more credibly true in cases where the sanctions for breaking the rules can be ruinous (as e.g. when the rules are a nation’s laws) and less credibly true when they are much more minor (as e.g. when they are rules about posting on an internet forum).
(So I assumed that you had in mind some particular set of contexts where it’s, at least, plausible that someone might find it self-evident that the rules-with-sanctions-attached should cover only the essentials. If you’re arguing in full generality, then I say: no, there are tradeoffs governing what aspects of your values if any should be covered by rules. You say that human history since Hammurabi has shown that “it mostly works pretty great” but in approximately zero cases since Hammurabi have the rules in fact had scope as limited as you describe.)
Having made the assumption that the best argument for weakly-formalized rules is the existence of ingenious adversaries opposed to everything that you stand for, you then say that such adversaries “mostly aren’t a real thing”.
That’s true in many contexts, but I’m not sure it’s true in full generality. Suppose “your Society” is a political group, for instance; of course your political opponents don’t oppose literally all your values, but they may oppose all the ones that distinguish you as a group, and that “your Society” is working together to promote.
(So I assumed that you had in mind some particular set of contexts that doesn’t face ingenious genuinely-hostile actors. If you’re arguing in full generality, you can’t assume that, and not having to worry about such people is a load-bearing part of your argument.)
Perhaps your point isn’t that you were making an argument so broad that the details of the examples don’t matter, but only that the examples were well suited to the particular cases you had in mind, and, in particular, that the life-and-death-ness of the examples isn’t especially relevant. But it seems quite relevant to me, because a key part of your argument is that rules should be narrowly scoped: in your words, they are “just there to stop ourselves from trying to kill each other when your freedom and dignity is getting in the way of my freedom and dignity”. Now, to be sure, “kill” might be understood in some metaphorical way. But when you say “the rules are just there to stop ourselves from trying to kill each other” and present a bunch of examples in which the rules are literally there to stop people dying, along with one example in which they aren’t and which you explicitly say isn’t an example of what you’re pointing at … well, I don’t think you can be too surprised if someone draws the conclusion that your arguments were aimed at life-and-death cases.
And that life-and-death-ness is, it seems to me, relevant. Because, as I said above, it becomes more plausible that rules-with-sanctions should be narrow in scope and precisely defined when the consequences of breaking them are more severe. Laws whose violation can get you locked up or executed? Yup, it sure seems important that we not have more of those than we need, and that you should be able to tell what will and won’t make those things happen to you. The moderation rules of an internet forum, where the absolute worst thing that can happen is that you can no longer post or comment there? Maybe not so much.
Dropping down a level or two of meta, what about the object-level claim that “harms from commenters who hurt other users’ feelings can be successfully mitigated by rules, because such commenters are just trying to make what they see as good comments rather than being sadistic emotional damage maximizers”? I’m not convinced by that argument, at least not yet; internet forums are fairly well known for sometimes harbouring commenters who are something like sadistic emotional damage maximizers, and even someone who can’t fairly be described that way can get a bee in their bonnet about some other person or community or set of ideas and start behaving rather like a damage-maximizer. Less Wrong seems to me to have fewer such people than most online communities (which may be partly because the moderators work hard to make it so; I don’t know) but it also has more ingenious people than most online communities, and when someone goes wholly or partly damage-maximizer on LW I think we have more of a “nearest unblocked strategy” problem than most places do.
I am not claiming that your conclusion is wrong. It might be right. But I am not convinced by your argument for it.
(I am also not convinced that someone has to be a damage-maximizer in order to behave unhelpfully in the presence of inevitably-incomplete rules.)
(Apologies for the very slow response.)