Evaluating whether to change a thing at the moment when it is maximally annoying (as would be the case in ad-hoc votes) will have different results from evaluating it at a predetermined time.
I’d suggest evaluating the policy of ‘demand that an approved norm stay in place until the scheduled vote’ at the first scheduled vote following any scheduled vote in which a norm was dropped that people had wanted to drop mid-cycle but couldn’t because of the policy.
Your suggestion makes sense for an experiment, but misses the whole point of this experiment. This, to me, seems like exactly the unpleasant valley dynamic. “We tried holding ourselves to a standard of ‘we finish the experiments that we start,’ but we got a couple of experiments in and we didn’t like it. Let’s stop.”
“Last fortnight, we canceled [Idea which appeared to be horrible seconds after implementing it], which we continued for an entire fortnight because of our policy. Today we look at all available evidence and must decide if the meta-experiment generates benefits greater than the costs.”
If you have no norm for evaluating that rule explicitly, it doesn’t mean that you won’t evaluate it. Maybe evaluating it every time it applies is excessive, but pretending that you won’t quickly learn to put exit clauses in experiments that are likely to need them ‘notwithstanding any other provision’ is failing to accurately predict.
I think you miss the point that Duncan wants to train the ability to operate outside one’s comfort zone by following through on goals that are set.
A norm being very annoying wouldn’t be a reason to drop it before the scheduled vote. The norm would have to actually create substantial harm.
I read that “this is causing substantial harm” would be insufficient to cancel a norm, but expect that “this is creating a physical hazard” would be enough to reject the norm mid-cycle. The problem is that every edge has edge cases, and if there’s a false negative in a midterm evaluation of danger...
Maybe I’m concluding that the paramilitary aesthetic will be more /thing/ than others expect it to be. In my observation, authoritarian paramilitary-styled groups are much more /thing/ than other people expect them to be. (My own expectations, OTOH, are expected to be accurate because subjectivity.)
Duncan’s rule one is “A Dragon will protect itself”.
I don’t think whether something is physical would be the prime distinction but whether the harm is substantial. If following a norm would likely result in someone losing his job, that isn’t physical harm but substantial harm that likely warrants violating the norm.