Bundle your Experiments
Link post
The point of this post feels almost too obvious to be worth saying, yet I doubt that it’s widely followed.
People often avoid doing projects that have a low probability of success, even when the expected value is high. To counter this bias, I recommend that you mentally combine many such projects into a strategy of trying new things, and evaluate the strategy’s probability of success.
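To make the arithmetic behind this concrete, here is a minimal sketch. The 10% per-experiment success chance is an assumed number chosen for illustration, not a measurement: with independent experiments, a bundle's odds of at least one success are far better than any single experiment's.

```python
# Probability that at least one of n independent experiments succeeds,
# given each succeeds with probability p.
def bundle_success_prob(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# A single 10%-chance experiment feels doomed; a bundle of ten does not.
print(round(bundle_success_prob(0.10, 1), 3))   # 0.1   — one experiment
print(round(bundle_success_prob(0.10, 10), 3))  # 0.651 — a bundle of ten
```

Evaluating the strategy ("try ten cheap experiments") rather than each project separately is what turns a discouraging 10% into an encouraging 65%.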
1.
Eliezer says in On Doing the Improbable:
“I’ve noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they’re doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work.
...
But you can’t get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. ‘It might not work’ is enough to force them to make a great effort to continue past that 30% failure probability. It’s not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.”
I expect this reluctance to work on projects with a large chance of failure is a widespread problem for individual self-improvement experiments.
2.
One piece of advice I got from my CFAR workshop was to try lots of things. Their reasoning involved the expectation that we’d repeat the things that worked, and forget the things that didn’t work.
I’ve been hesitant to apply this advice to things that feel unlikely to work, and I expect other people have similar reluctance.
The relevant kind of “things” are experiments that cost maybe 10 to 100 hours to try, which don’t risk much other than wasting time, and for which I should expect on the order of a 10% chance of noticeable long-term benefits.
Here are some examples of the kind of experiments I have in mind:
gratitude journal
morning pages
meditation
vitamin D supplements
folate supplements
a low carb diet
the Plant Paradox diet
an anti-anxiety drug
ashwagandha
whole fruit coffee extract
piracetam
phenibut
modafinil
a circling workshop
Auditory Integration Training
various self-help books
yoga
sensory deprivation chamber
I’ve cheated slightly, in that I was more likely to add something to this list if it worked for me than if it was a failure I’d rather forget. So my roughly 50% success rate with these overstates what to expect going in.
The simple practice of forgetting the failures and mostly repeating the successes is almost enough by itself to make the net value of these experiments positive. More importantly, I kept the costs of the experiments low, so the benefits of the top few outweighed the costs of the failures by a large factor.
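A back-of-the-envelope sketch of why this works. Every number here is hypothetical, chosen only to illustrate the shape of the tradeoff, not taken from my actual experiments: the point is that when trials are cheap and a rare success pays off many times its cost, the bundle has positive expected value even though most trials fail.

```python
# Hypothetical figures for illustration only.
n_trials = 10
cost_per_trial = 30        # hours spent per experiment (assumption)
p_success = 0.10           # chance any one experiment pays off (assumption)
payoff_per_success = 500   # hours-equivalent long-term benefit (assumption)

total_cost = n_trials * cost_per_trial                        # 300 hours
expected_benefit = n_trials * p_success * payoff_per_success  # 500 hours
print(expected_benefit - total_cost)  # positive despite 90% of trials failing
```

Keeping `cost_per_trial` small is what makes the asymmetry work; doubling it here would flip the sign.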
3.
I face a similar situation when I’m investing.
The probability that I’ll make any profit on a given investment is close to 50%, and the probability of beating the market on a given investment is lower. I don’t calculate actual numbers for that, because doing so would be more likely to bias me than to help me.
I would find it rather discouraging to evaluate each investment separately. Doing so would focus my attention on the fact that any individual result is indistinguishable from luck.
Instead, I focus my evaluations much more on bundles of hundreds of trades, often associated with a particular strategy. Aggregating evidence in that manner smooths out the good and bad luck to make my skill (or lack thereof) more conspicuous. I’m focusing in this post not on the logical interpretation of evidence, but on how the subconscious parts of my mind react. This mental bundling of tasks is particularly important for my subconscious impressions of whether I’m being productive.
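One way to see why the bundle makes skill more conspicuous is a standard sampling-error argument (nothing specific to my trading): the noise in an observed win rate shrinks with the square root of the number of trades, so a small edge that is invisible in one trade becomes visible over hundreds.

```python
import math

# Standard error of an observed win rate over n independent trades,
# near a 50/50 base rate (the worst case for telling skill from luck).
def win_rate_std_error(n: int, p: float = 0.5) -> float:
    return math.sqrt(p * (1 - p) / n)

print(win_rate_std_error(1))    # 0.5   — one trade says almost nothing
print(win_rate_std_error(400))  # 0.025 — a 55% win rate over 400 trades
                                #         sits 2 standard errors above chance
```

This assumes independent trades; correlated bets (e.g. many positions exposed to the same sector) shrink the effective sample size.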
I believe this is a well-known insight (possibly from poker?), but I can’t figure out where I’ve seen it described.
I’ve partly applied this approach to self-improvement tasks (though not as explicitly as I ought to have), and it has probably helped.