This is bullshit. It doesn’t matter what the absolute risk of picking up the penny is; what matters is whether not picking up the penny is less risky than picking up the penny. And, in general, there is no particular reason to believe that there is any difference in risk between picking it up and not picking it up.
This is a deep and subtle fallacy. A course of action (or plan) that involves picking up a penny is more complicated than one that does not. Complex plans must be penalized in the same way that complex theories must be (a plan is really just a theory that a certain course of action will lead to a good outcome).
One of my strong personal beliefs is that people drastically underestimate how much complexity hurts.
Complex plans must be penalized in the same way that complex theories must be (a plan is really just a theory that a certain course of action will lead to a good outcome).
Mm, no. Theories are mutually exclusive. The thing that makes a complex theory unlikely is the fact that it competes with 10^N other theories, for large N. By your definition of a plan, plans are not mutually exclusive, so this analogy vanishes. Of course, you could define a plan as the theory that a certain course of action will lead to the best outcome out of all possible plans, in which case your statement would be true but wouldn’t apply: the agent that finds the best possible plan before acting starves to death, decays, and is forgotten long before ever eating the food in front of it.
I find this comment confusing. Represent a plan as a list of discrete actions. Then “X,Y” is different from “X,Y,Z” where X/Y/Z are actions. Picking “X,Y,Z” means you did not pick “X,Y”. The number of plans is also exponential in the number of actions, so a more complex (longer) plan has an exponentially greater number of competitors.
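To make that counting concrete, here is a minimal Python sketch (the three-action set and the plan lengths are toy values, not anything from the discussion):

```python
from itertools import product

# Toy action set -- illustrative only, not from the discussion above.
actions = ["X", "Y", "Z"]

for length in range(1, 4):
    plans = list(product(actions, repeat=length))
    # With k actions there are k**length distinct plans of this length,
    # so a prior spread uniformly over them shrinks exponentially in length.
    print(f"length {length}: {len(plans)} plans, "
          f"uniform prior {1 / len(plans):.4f} each")
```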
You defined a plan as a theory that a certain course of action will lead to a good outcome. Picking “X,Y,Z” does not mean you don’t think “X,Y” will also lead to a good outcome. Therefore, they aren’t really competitors.
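A toy illustration of this objection, with made-up credences: propositions of the form “plan p leads to a good outcome” can all be probable at once, so they never split a shared probability budget the way mutually exclusive theories do.

```python
# Made-up credences for two "theories" of the form
# "plan p leads to a good outcome".
p_good_XY = 0.70     # credence that plan (X, Y) ends well
p_good_XYZ = 0.65    # credence that plan (X, Y, Z) ends well

# Mutually exclusive hypotheses must have probabilities summing to at most 1.
# These two sum to 1.35, and that is perfectly coherent, because both plans
# can lead to good outcomes -- they are not competing for shared mass.
print(p_good_XY + p_good_XYZ)
```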
That isn’t a fallacy. You just claim (right or wrong) to have more evidence about the risks in question.
A course of action (or plan) that involves picking up a penny is more complicated than one that does not.
Perhaps. But what if the two plans in question are as follows?

1. Pick up the penny, then keep on walking.
2. Keep on walking without picking up the penny.
That’s right. 2 isn’t a plan. You go about walking down the street. Hundreds of muscles move with the calibration necessary to maintain both balance and forward momentum under the influence of gravity. You respond to stimuli from the inner ear to correct imbalance. You instinctively compensate for unexpected instabilities in your footing. If you see a cricket ball (baseball, if any Americans are confused) flying out at you from a nearby backyard, you duck or catch it. You pick up money.
Now, I’m not saying the above is how I react. I see 5c and 10c coins and have a mild aversive reaction that balances the money-grab impulse. But I am not comfortable asserting that my reactions are particularly safer because they use less actual action. The components of plans that hurt me are the initiative and executive requirements. “Just take the money” may be the better option in such a situation, and my inhibition just a premature optimisation.
I’m not really sure what you mean. Of course it is hard to assign complexities to things like plans. Even more difficult is the problem of weighing increased complexity against increased performance. But it’s strange to object to the statement that of the two plans {X, X+pick up penny}, the latter is more complicated.
But it’s strange to object to the statement that of the two plans {X, X+pick up penny}, the latter is more complicated.
I suggest that for people with a habit of picking up currency they come across, the plans, as implemented by their brains, would be better described as {Y, Y + suppress the pick-up-valued-item impulse in the case of pennies}. This sort of change is (a trivial instance of) the complexity that most matters to humans. Until the habit is entrenched, the stimulus of a penny will have a transient but measurable deleterious effect on concentration. The risk of, for example, stumbling is also increased, with a well-practised visual-stimulus-to-motor-action coupling interrupted by executive override.
I argue that the optimisation of literal penny-grabbing protocols is an instance of metaphorical penny grabbing that does, in fact, waste more than it saves, as measured by complexity.
I suggest that for people with a habit of picking up currency they come across, the plans, as implemented by their brains, would be better described as {Y, Y + suppress the pick-up-valued-item impulse in the case of pennies}.
I buy this point, but it seems like we’re talking about different issues.
Penny grabbing is probably not a good example of the problem I am really concerned with. This is the situation where there is some action X that we have to decide whether or not to do, and we have relatively little information to bring to bear. A good example came up in the financial crisis: “should we bail out the banks?” Well, some people think we should, others disagree. There are all these hard-to-evaluate pros and cons in the outcome-prediction process, but at the end of the day the pros seem to add up to a bit more than the cons.
The problem is that some people see the question as symmetric: either we bail out the banks or we don’t. No reason to prefer a priori one course of action over the other. But my whole point is that we should prefer the simpler action a priori, and require much more concrete evidence in favor of the complex plan before we choose it.
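One way to cash out this asymmetry is as a prior-odds calculation. The sketch below assumes an illustrative 100:1 prior in favor of the simpler plan (a made-up figure); the point is only that evidence for the complex plan must overcome that prior before the complex plan should be chosen.

```python
import math

# Illustrative prior: the simpler plan is favored 100:1 a priori
# (a made-up number, used only to show the shape of the argument).
prior_odds_for_simple = 100.0

# Bayes: posterior odds = prior odds x likelihood ratio. The complex plan
# should be chosen only if the evidence's likelihood ratio in its favor
# exceeds the prior odds against it.
required_ratio = prior_odds_for_simple
print(f"evidence must favor the complex plan by more than {required_ratio:.0f}:1,")
print(f"i.e. roughly {math.log2(required_ratio):.1f} bits of evidence")
```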
There are all these hard-to-evaluate pros and cons in the outcome-prediction process, but at the end of the day the pros seem to add up to a bit more than the cons.
An interesting topic and I disagree.
The problem is that some people see the question as symmetric: either we bail out the banks or we don’t. No reason to prefer a priori one course of action over the other. But my whole point is that we should prefer the simpler action a priori, and require much more concrete evidence in favor of the complex plan before we choose it.
I read your example and thought {X, X + bail out banks}. But since you are a bailout advocate I suppose you wouldn’t have brought up the example if you thought the complexity weighed in against your position. So I infer that you mean {bail out banks, do a more complex thing Y}. For many instances of Y that people are likely to propose, I expect I would agree with your judgement, with the complexity being a significant factor.
But since you are a bailout advocate I suppose you wouldn’t have brought up the example if you thought the complexity weighed in against your position. So I infer that you mean {bail out banks, do a more complex thing Y}.
No, I’m against the bailout! My point is that the bailout is complex, therefore it should be penalized even if it seems to have a good predicted outcome. I see the choice as {X=no intervention, Y=bail out banks}; the former is far simpler, so it should be preferred.
No, I’m against the bailout! My point is that the bailout is complex, therefore it should be penalized even if it seems to have a good predicted outcome. I see the choice as {X=no intervention, Y=bail out banks}; the former is far simpler, so it should be preferred.
Ahh, ok. I read the “the pros seem to add up to a bit more than the cons” part and assumed the reverse.