# Mitchell_Porter comments on Spend twice as much effort every time you attempt to solve a problem

• In gambling, the strategy of repeatedly “doubling down” is called a martingale (google to find previous discussions on LW), and the main criticism is that you may run out of money to bet, before chance finally turns in your favor. Analogously, your formal analysis here doesn’t take into account the possibility of running out of time or energy before the desired goal is achieved.
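To make the bankroll point concrete, here is a minimal simulation of the classic martingale with a finite bankroll (all the numbers are made up); even with a fair coin, a long enough losing streak eventually outgrows what you can bet:

```python
import random

def martingale(bankroll, base_bet=1, win_prob=0.5, rounds=1000):
    """Bet `base_bet`; double the bet after each loss, reset after a win.
    Stop early once the next bet can no longer be covered."""
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll:
            break  # ruined before chance turned in our favor
        if random.random() < win_prob:
            bankroll += bet
            bet = base_bet  # a win recoups all prior losses plus base_bet
        else:
            bankroll -= bet
            bet *= 2  # double down
    return bankroll

random.seed(0)
outcomes = [martingale(bankroll=100) for _ in range(10_000)]
# Typical shape of the distribution: many runs end slightly up,
# a minority end near zero; the rare big losses offset the small wins.
```

With a fair coin the expected value is unchanged, which is exactly the problem: the strategy only reshapes the distribution, trading frequent small wins for occasional ruin.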

I also have trouble interpreting your concrete example as an application of the proposed strategy. I thought the idea was, you don’t know how hard it will be to do something, but you commit to doubling your effort each time you try, however many attempts are necessary. In the example, the goal is apparently to discover whether you should pursue a career in research.

Your proposition is, rather than just jump into a three-year degree, test the waters in a strategy of escalating commitment that starts with as little as three minutes. Maybe you’ll find out whether research is the life for you, with only minutes or weeks or months spent, rather than years. OK, that sounds reasonable.

It’s as if you’re applying the model of escalating commitment from a different angle than usual. Usually it’s interpreted as meaning: try, try harder, try harder, try HARDER, try REALLY HARD—and eventually you’ll get what you want. Whereas you’re saying: instead of beginning with a big commitment, start with the smallest amount of effort that could conceivably help you decide whether it’s worth it, and then escalate from there, until you know. Probably this already has a name, but I’ll call it an “acorn martingale”, where “acorn” means that you start as small as possible.

Basically, the gap between your formal argument and your concrete example is that you never formally say, “be sure to start your martingale with as small a bet/effort as possible”. I think it’s also implicit in your example that you know an upper bound on d: doing the full three-year degree should be enough to find out whether you should have done it. You don’t mention the possibility that to find out whether you really like research, you might have to follow the degree with six years as a postdoc, twelve years as a professor, a zillion years as a posthuman gedankenbot… which is where the classic martingale leads.

But if you do know an upper bound on the task difficulty d, then it should be possible to formally argue that an acorn martingale is optimal, and to view your concrete scenario as an example of this.
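A sketch of the strategy such an argument would be about (the function names and numbers here are hypothetical, not from the post): start at the smallest conceivably useful effort, double until success, and quit once the next attempt would exceed the known upper bound.

```python
def acorn_martingale(attempt, initial_effort, max_effort):
    """Start with the smallest conceivably useful effort and double it,
    quitting once an attempt would exceed `max_effort` (the known upper
    bound on worthwhile effort).

    attempt(effort) -> bool, True if that much effort succeeds."""
    effort = initial_effort
    total_spent = 0
    while effort <= max_effort:
        total_spent += effort
        if attempt(effort):
            return True, total_spent
        effort *= 2
    return False, total_spent

# Hypothetical numbers: the task secretly needs 50 units of effort;
# we start at 3 units and would give up above 400.
needed = 50
success, spent = acorn_martingale(lambda e: e >= needed, 3, 400)
# Attempts go 3, 6, 12, 24, 48, 96 -> success at 96, total spent 189,
# within four times the true (unknown) difficulty.
```

The point of the sketch is the geometric sum: each attempt costs about as much as all previous attempts combined, so the overshoot past the true difficulty is bounded.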

• In the gambling house, we already know that gambling isn’t worth it. The error of the martingale strategy is to think that you can turn lots of non-worthwhile bets into one big worthwhile bet by doubling down (which would be true if you didn’t run out of money).

In the world outside the gambling house, we don’t do things unless we expect that they’re worth the effort. But we might not know what scale of effort is required.

The post didn’t offer an analysis of cases where we give up, but I read it with the implicit assumption that we give up when something no longer looks worth the next doubling of effort. It still seemed like a reasonable heuristic. If at the beginning I think it’s probable I’ll want to put in the effort to actually accomplish the thing, but I want to put in minimal effort, then the doubling strategy will look like a reasonably good strategy to implement. (Adding the provision that we give up if it’s not worth the next doubling makes it better.)

But let’s think about how the analysis changes if we explicitly account for quitting.

Say the expected payoff of success is $P. (I’ll put everything in dollars.) We know some amount of effort $S will result in success, but we don’t know what it is. We’re assuming we have to commit to a level of effort at the beginning of each attempt; otherwise, we could just keep adding effort and stop when we succeed (or when we don’t think it’s worth continuing).

We have a subjective probability distribution over the amount of effort the task might require, but the point of an analysis like the one in the post is to get a rule of thumb that’s not dependent on this. So instead let’s think about the break-even point $Q where, if I’ve made a failed attempt costing $Q, I would no longer want to make another attempt. This can be derived from my prior: it’s the point where, if I knew the task was at least that difficult, my estimation of difficulty becomes too high. However, by only using this one piece of information from the prior, we can construct a strategy which is fairly robust to different probability distributions. And we need to use something from the prior, since the prior is how we decide whether to quit.

Obviously, $Q < $P.

It doesn’t make sense to make any attempts with more effort than $Q. So $Q will be our max effort.

But the required amount of effort might be much lower than $Q. So we don’t want to just try $Q straight away.

At this point, the analysis in the post can be translated to argue that we should start at some $Q/2^n for large-ish n, and keep doubling our effort. This gives us a relatively low upper bound on our total effort (twice $Q) while also giving a nice bound on our total effort in terms of the true amount of effort required, $S (specifically, our bound is four times $S) -- meaning we will never spend too much effort on an easy task.
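As a quick numerical sanity check of those two bounds (the values of Q, n, and the difficulties S are arbitrary; the 4×$S bound assumes S is at least the starting effort Q/2^n):

```python
def total_effort(S, Q, n=10):
    """Total effort spent by the schedule Q/2**n, Q/2**(n-1), ..., Q,
    stopping at the first attempt whose effort reaches the true
    requirement S (or at Q, if we give up)."""
    total, effort = 0.0, Q / 2**n
    while effort <= Q:
        total += effort
        if effort >= S:
            break  # success on this attempt
        effort *= 2
    return total

Q, n = 100.0, 10
for S in (0.2, 1.0, 7.3, 40.0, 100.0):
    spent = total_effort(S, Q, n)
    assert spent < 2 * Q  # never more than twice the quit threshold...
    assert spent < 4 * S  # ...and never more than 4x the true difficulty
```

Both bounds are just the geometric sum: the attempts up to and including the first success add up to less than twice the final attempt, and the final attempt is less than twice $S (since the previous one failed).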

I admit this analysis was a bit sloppy; maybe there’s an improved heuristic if we think a bit more carefully about which amounts of effort are worthwhile, while still avoiding use of the full prior.

• The doubling strategy also has counterparts in array allocation strategies and algorithmic analysis (it’s common to double an array’s size each time it gets too large, to amortize the copying). Successive Halving, a racing algorithm, is also of interest here if you think of a portfolio of tasks instead of a single task.
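The amortization argument is the same geometric sum. A toy dynamic array makes it concrete (Python’s built-in list does this internally, though with a smaller growth factor):

```python
class DynamicArray:
    """Grow-by-doubling storage; `copies` counts element copies performed
    during resizes, to check the amortized bound."""
    def __init__(self):
        self.capacity, self.size, self.copies = 1, 0, 0
        self.data = [None]

    def append(self, x):
        if self.size == self.capacity:
            self.capacity *= 2
            new = [None] * self.capacity
            for i in range(self.size):
                new[i] = self.data[i]
                self.copies += 1
            self.data = new
        self.data[self.size] = x
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
# Resizes happened at sizes 1, 2, 4, ..., 512, copying
# 1 + 2 + 4 + ... + 512 = 1023 elements in total: under 2n copy
# work for n appends, by the same geometric sum as the effort schedule.
```

Growing by a constant factor (rather than a constant increment) is what makes the per-append cost O(1) amortized, just as constant-factor effort escalation keeps total effort within a constant factor of the true difficulty.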