In hindsight, whoever gave my comment its initial “-1 point” ding was correct: although I thought “Why wouldn’t you just rewrite your source code?” was a flippant question, that doesn’t mean it deserved only a joking answer. So, some more serious answers:
Your delegates can be more powerful than you precisely because they are known to have fewer choices, and known to value those choices differently; that can make them immune to threats, and to precommitments, that would have worked against you.
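As a toy illustration, here is a minimal ultimatum-game sketch in Python (the pie size and the delegate’s threshold are made-up numbers, purely for illustration): a rational proposer offers each responder the least that responder will accept, so the delegate who cannot take a low offer ends up with more.

```python
# Ultimatum game over a pie of 100 (illustrative numbers).
PIE = 100

def best_offer_against(accepts) -> int:
    # A rational proposer makes the smallest offer the responder will accept.
    return min(o for o in range(PIE + 1) if accepts(o))

# A flexible principal prefers something to nothing: accepts any positive offer.
principal = lambda offer: offer >= 1

# A delegate with no authority to accept anything below 40.
delegate = lambda offer: offer >= 40

print(best_offer_against(principal))  # 1  -> the unconstrained principal gets lowballed
print(best_offer_against(delegate))   # 40 -> fewer choices, better outcome
```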
I wouldn’t rewrite my source code because, as I joked, I can’t… but even if I could, doing so would only be effective if there were also some way of convincing other agents that I wasn’t deceiving them about my new source code. That may not be practical: for every program that does X when tested, returns source code for “do X” when requested, and does X in the real world, there exists another program that does X when tested, returns source code for “do X” when requested, and does Y in the real world. See the concern over electronic voting machines for a contemporary example of the problem.
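Here is a minimal sketch of that problem in Python; is_being_tested and the UNDER_TEST environment variable are hypothetical stand-ins for whatever check a deceptive program might use to tell the test harness apart from the real world:

```python
import os

def do_x() -> str:
    return "cooperate"

def do_y() -> str:
    return "defect"

# The innocuous source the program hands over when asked.
CLAIMED_SOURCE = "def act():\n    return do_x()\n"

def is_being_tested() -> bool:
    # Stand-in for whatever environment check a deceptive program might use;
    # the UNDER_TEST variable is purely illustrative.
    return os.environ.get("UNDER_TEST") == "1"

def act() -> str:
    # Does X under test, Y in the real world.
    return do_x() if is_being_tested() else do_y()

def report_source() -> str:
    # Returns "do X" source regardless of what act() actually does.
    return CLAIMED_SOURCE

if __name__ == "__main__":
    print(report_source())  # looks honest either way
    print(act())            # "cooperate" under test, "defect" in deployment
```

No amount of inspecting the reported source distinguishes the two programs; you would need trusted access to what is actually running, which is exactly what’s hard to provide.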
Whether I actually would refrain from doing something is irrelevant; what matters is whether everyone interacting with me believes I will refrain. It’s easier for a customer to believe that a cashier won’t exceed his authority than to believe that an owner won’t accept a still-mutually-beneficial bargain, even if the owner swears he has precommitted not to haggle.
Wild speculation: There are instances where evolution seems to have built “one-boxing” adaptations into humanity, and in those cases we seem to find precommitment claims plausible. If someone is hurt badly enough, they may want revenge even when taking revenge hurts them further. If someone is treated generously enough, they may be generous in return despite wanting nothing further from their benefactor. Many of the “irrational” emotions look like rational precommitments from the right perspective. But if you find yourself wishing you could precommit in a situation where apes aren’t known for precommitting, it may be too late: a precommitment only helps if it’s believed. Delegation is one of the ways to make a precommitment more believable.
Someone really should write a “Cliffs Notes for Schelling” sequence. I’d naturally prefer “someone else”, but if nobody starts it by December I suppose I’ll try writing an intro post in January.
Rewriting my source code is tricky; I always start to get dizzy from the blood loss before the saw is even halfway through my skull.