Who are these “people who think rules are unworkable and want to empower an Authority to make judgement calls controlling the minute details of everyone’s behaviour”?

That would be the mods of Less Wrong.
(I’m not sure whether that’s a serious answer or snark, but I’ll treat it as the former since I don’t think I have anything much to say to the latter.)
If that’s what Zack means, then it seems odd that the OP puts so much emphasis on how rules can be an effective way to keep people safer from literal death, brain damage, and the like.
The rules are just there to stop ourselves from trying to kill each other when your freedom and dignity is getting in the way of my freedom and dignity, so that we can focus on creating Value instead of wasting effort trying to kill each other. [...] Traffic laws make it clear to everyone when it’s safe to enter the road. If everyone just entered the road whenever they felt like it, that would be dangerous [...] Lead paint is an environmental hazard, so it was banned in 1978. Because of the ban, paint manufacturers stopped making lead paint. [...] So paint manufacturers still ended up using mercury in some paints until 1991 when that was banned, too. But once it was banned, they stopped.
Zack did mention one example of a rule that was about something less life-and-death-y, namely income tax—but that was an example of where nice clear rules aren’t so effective.
So I think either Zack wasn’t primarily taking aim at the LW moderators or his choices of examples are systematically ill-fitted to what they’re meant to be illustrating.
Sort of. The post is about the philosophy of rulemaking. In order to ground my thinking about the topic, I need to mention some concrete examples: I think it would be a much worse post if I just talked about rules in general without considering any specific examples of rules. But I don’t think the philosophical substance is particularly affected by how “life-or-deathy” the examples were, and I don’t understand why you think it would be. (If for some reason I wrote a blog post about how 2x+3x=5x, the claim would hold whether x=1 or x=99999.) Can you explain why you think the life-or-deathness is relevant? (As it happens, I had considered using a “noise pollution” example about people who like to throw parties before ultimately going with the lead paint example. I’m having trouble reconstructing from introspection and memory why I went with the paint rather than the parties, but I don’t think it matters much.)
Now, it’s also true that the reason I was thinking about these aspects of the philosophy of rulemaking recently is that it was relevant to a moderation dispute on this website. To spell it out, I think that harms from commenters who hurt other users’ feelings can be successfully mitigated by rules, because such commenters are just trying to make what they see as good comments rather than being sadistic emotional damage maximizers, similarly to how paint manufacturers are just trying to make high-quality paint rather than being environmental lead maximizers.
The fact that that’s why I was thinking about the philosophy of rulemaking is not a secret. You’re allowed to notice that, and I’m happy to spell it out in the comments. But I dispute that that’s what the post is “really” about (at least for some relevant notion of “really”).
(Apologies for the very slow response.)

Well, the post is claiming something along the lines of “our rules shouldn’t try to do too much, and if they don’t then it isn’t impractical to make them all explicit and leave little to individuals’ judgement”, but …
… if it’s claiming that this is universally true, which seems like a strong and counterintuitive claim, then I don’t think it’s made much of an attempt to make its case …
… but if it’s claiming only that it’s true in some particular set of contexts then your choice of examples is highly relevant, because the examples give some indication of what that particular set of contexts is, and because the plausibility of your argument leans on the plausibility of the examples.
So I think I could buy “this post is a very general and philosophical one so what examples I use doesn’t matter much”, or I could buy “this post gives some reason to accept the claims it makes”, but I don’t think I buy them together.
I’d been assuming we were in the second case, but from what you now say it seems like maybe we’re in the first, so let me explain why I find the arguments unconvincing if they are meant to be so general.
You don’t so much argue for the claim you’re making as counter arguments against it. And, in fact, pretty much the only argument against that you consider goes like this: “The space of all possible behaviors is unthinkably vast. What if the formidable intelligence of an adversary who hates everything our Society stands for, comes up with a behavior that’s really bad but isn’t forbidden by any of Society’s rules?”
This is, to my mind, a weird argument. I have never once heard anyone argue for rules that leave some things to human judgement by appealing to the unthinkable vastness of the space of possibilities as opposed to its thinkable and quite comprehensible, but still pretty large, vastness, or by postulating someone “who hates everything our Society stands for” as opposed to someone who merely has some interests or purposes that don’t fit well with ours.
(So I assumed that you had in mind some particular set of contexts in which, for whatever reason, those who want more flexibility in the rules than you do were concerned about such things. If you’re arguing in full generality, then I say: no, most of the time when people think rules need to allow for individual judgement they aren’t appealing to anything so dramatic.)
You say that those who want more flexibility in the rules than you do think that “we need to empower leaders with the Authority to make judgement calls—even to control the minute details of anyone’s behavior, if that’s what it takes to safeguard Society’s Values”.
Again, this is a weird thing for them to think, and doesn’t seem like a good fit for the position of more than a tiny fraction of people I have encountered.
(So I assumed that you had in mind some particular set of contexts in which either the advocates of More Space For Human Judgement are unusually authoritarian, or the domain within which they might seek to “control the minute details of everyone’s behaviour” is so constrained that seeking to do that doesn’t require one to be unusually authoritarian. If you’re arguing in full generality, then I say: no, this is not what the people you profess to be talking about generally want.)
You say that the rules aren’t there to express society’s values, but only to enable people not to kill one another while they express their values. (I take it that “society” and “kill”, at least, should be interpreted somewhat flexibly.)
Whether that’s what some particular set of rules is for or not will vary. It seems to me more credibly true in cases where the sanctions for breaking the rules can be ruinous (as e.g. when the rules are a nation’s laws) and less credibly true when they are much more minor (as e.g. when they are rules about posting on an internet forum).
(So I assumed that you had in mind some particular set of contexts where it’s, at least, plausible that someone might find it self-evident that the rules-with-sanctions-attached should cover only the essentials. If you’re arguing in full generality, then I say: no, there are tradeoffs governing what aspects of your values if any should be covered by rules. You say that human history since Hammurabi has shown that “it mostly works pretty great” but in approximately zero cases since Hammurabi have the rules in fact had scope as limited as you describe.)
Having made the assumption that the best argument for weakly-formalized rules is the existence of ingenious adversaries opposed to everything that you stand for, you then say that such adversaries “mostly aren’t a real thing”.
That’s true in many contexts, but I’m not sure it’s true in full generality. Suppose “your Society” is a political group, for instance; of course your political opponents don’t oppose literally all your values, but they may oppose all the ones that distinguish you as a group, and that “your Society” is working together to promote.
(So I assumed that you had in mind some particular set of contexts that doesn’t face ingenious genuinely-hostile actors. If you’re arguing in full generality, you can’t assume that, and not having to worry about such people is a load-bearing part of your argument.)
Perhaps your point isn’t that you were making an argument so broad that the details of the examples don’t matter, but only that the examples were so appropriate to the particular cases you had in mind. And, in particular, that the life-and-death-ness of the examples isn’t particularly relevant. -- But it seems quite relevant to me, because a key part of your argument is that rules should be narrowly scoped: in your words, they are “just there to stop ourselves from trying to kill each other when your freedom and dignity is getting in the way of my freedom and dignity”. Now, to be sure, “kill” might be understood in some metaphorical way. But when you say “the rules are just there to stop ourselves from trying to kill each other” and present a bunch of examples in which the rules are literally there to stop people dying, along with one example in which they aren’t and which you explicitly say isn’t an example of what you’re pointing at … well, I don’t think you can be too surprised if someone draws the conclusion that your arguments were aimed at life-and-death cases.
And that life-or-death-ness is, it seems to me, relevant. Because, as I said above, it becomes more plausible that rules-with-sanctions should be narrow in scope and precisely defined when the consequences of breaking them are more severe. Laws whose violation can get you locked up or executed? Yup, it sure seems important that we not have more of those than we need and that you should be able to tell what will and won’t make those things happen to you. The moderation rules of an internet forum, where the absolute worst thing that can happen is that you can no longer post or comment there? Maybe not so much.
Dropping down a level or two of meta, what about the object-level claim that
harms from commenters who hurt other users’ feelings can be successfully mitigated by rules, because such commenters are just trying to make what they see as good comments rather than being sadistic emotional damage maximizers
? I’m not convinced by the argument, just yet; internet forums are fairly well known for sometimes harbouring commenters who are something like sadistic emotional damage maximizers, and even someone who can’t fairly be described that way can get a bee in their bonnet about some other person or community or set of ideas and start behaving rather like a damage-maximizer. Less Wrong seems to me to have fewer such people than most online communities (which may be partly because the moderators work hard to make it so; I don’t know) but it also has more ingenious people than most online communities, and when someone goes wholly or partly damage-maximizer on LW I think we have more of a “nearest unblocked strategy” problem than most places do.
I am not claiming that your conclusion is wrong. It might be right. But I am not convinced by your argument for it.
(I am also not convinced that someone has to be a damage-maximizer in order to behave unhelpfully in the presence of inevitably-incomplete rules.)
I believe this post to be substantially motivated by Zack’s disagreement with LessWrong moderators about appropriate norms on LessWrong. (Epistemic status: I am one of the moderators who spoke to Zack on the subject, as indicated[1] in the footer of his post.)
If that’s what Zack means, then it seems odd that the OP puts so much emphasis on how rules can be an effective way to keep people safer from literal death, brain damage, and the like.
Oh, I didn’t mean that the LW mods are the only examples of this sort of thing. But you did mention that you’d never encountered such people, and my response was to say that yes, you have indeed.
As for Zack’s examples, I think that they illustrate fairly well the general principle that he’s describing. I’ll leave it to him (or others) to answer the criticism beyond that.
Noted. For what it’s worth, I haven’t as yet seen much reason to believe that the LW moderators think they are empowered “to make judgement calls controlling the minute details of everyone’s behaviour”.
If you delete the word “the”, or better yet make things more explicit along the lines of “to make judgement calls about small details of what behaviour is acceptable when posting/commenting on Less Wrong”, then I dare say the moderators consider themselves empowered to do that. But that’s very much not the sort of thing that I understand when I read “make judgement calls controlling the minute details of everyone’s behaviour”, especially not at the end of an article full of examples about regulation of neurotoxic chemicals and road safety.
So perhaps this is yet another case where Zack says something that can with substantial stretching be interpreted as something true, but that (at least as it seems to me; others’ intuitions may differ) is an extremely unnatural way to say what he claims was all he was saying, and that strongly suggests something much worse but that is not actually true. I am beginning to find it rather tiring.