Key Decision Analysis—a fundamental rationality technique
This post is signal-boosting and recommending a strategy for improving your decision making that I picked up from the entrepreneur Ivan Mazour. He describes the process here, and publishes his own results every year on his blog.
In his words...
I believe that life is far too fast-paced for us to be able to make rational, carefully thought-out, decisions all the time. This completely contradicts my mathematical upbringing and training, but is something I have come to realise throughout my twenties. We need a way of keeping up with the constant barrage of decisions, even when the inevitable ‘decision fatigue’ sets in. The only way to do this, I find, is to act on instinct, but this only works if your instincts are correct. They cannot be correct all the time, of course, but if we can maximise the chance of making the right decision by instinct, then we have a strategy for coping with a complicated and highly productive life.
To sharpen my instincts, I keep a monthly journal of all key decisions which I make – decisions that could be truly life changing – and my instinctive reasons for why I made them. I go back only after exactly a year has passed, and I note down whether the decision was correct, and more importantly whether my instincts were right. At the end of the year, I go over all twelve months worth of notes, and search for any patterns amongst all of the right and wrong choices.
This is not a short-term strategy, as you can tell. In fact it takes exactly two years from the day you start following it, to the time that you can get some useful insights to sharpen your instincts. Keeping a diary of decisions has other uses, and there are many ways of getting an overview of your life prior to this, but it is only after the two years have passed that a genuine clear pattern presents itself.
This accords with some abstract theory about human rationality. A perfect-Bayesian expected utility maximizer doesn’t start out with an optimal policy. Rather, its strength is being able to learn from its experience (optimally), so that it converges towards the optimal policy.
Of course, humans have a number of limitations standing between us and perfect-decision-making-in-the-limit. Due to computational constraints, perfect Bayesian updating is out of reach. But among a number of limitations, the first and most fundamental consideration is “are you learning from your data at all?”.
If the consequences of your decisions don’t propagate back to the process(es) that you use to make decisions, then that decision process isn’t going to improve.
And I think that, by default, I mostly don’t learn from my own experience, for a few reasons:

- Reflection isn’t automatic: I’m likely to make many decisions, important and unimportant, without ever going back to check how they turned out, especially on long timescales.
- With hindsight bias and whatnot, I can’t trust myself to remember why I made a decision, and how I was thinking about it at the time, after I’ve seen how it turned out.
- In general, each situation gets treated as an isolated incident, instead of being examined at the level of my heuristics (i.e. the level of my decision-making apparatus).
So I need some process that involves writing things down, and that allows me to implement that back-propagation intentionally.
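As a concrete illustration (my own sketch, not part of Mazour’s write-up; all the names here are hypothetical), the loop can be modeled as entries that capture the reasoning up front and only get an outcome recorded at review time:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class DecisionEntry:
    """One logged decision: what was chosen and why, recorded before the outcome is known."""
    made_on: date
    decision: str
    instinctive_reasons: List[str]
    outcome: Optional[str] = None               # filled in at review time
    instincts_were_right: Optional[bool] = None # also filled in at review time

    def due_for_review(self, today: date) -> bool:
        # Per the quoted process, revisit an entry only after a full year has passed.
        return self.outcome is None and today >= self.made_on + timedelta(days=365)

# Log now; review later.
log = [DecisionEntry(date(2019, 1, 10), "Abandon project X",
                     ["momentum has stalled", "opportunity cost seems high"])]
due = [entry for entry in log if entry.due_for_review(date(2020, 1, 15))]
```

The key design point is that the instinctive reasons are written down before the outcome is known, so hindsight bias can’t rewrite them.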
I only started logging my decisions a little more than a year ago, and did the analysis for the end of 2018 this week, so I don’t have that much personal experience to share. I’m sharing anyway, because it will be years until I have lots of experience with this technique.
- I’ve been logging very big decisions (“Should I abandon X project?”) along with small decisions (“Some friends (y, z) just asked me if I want to go out to dinner with them. Should I join them, or keep working?”). In some situations, I get feedback about whether what I did was the right choice or not pretty much immediately, in which case I’ll log that too, so that I can draw out heuristics later.
- I’ve also been logging my mistakes (“I put a lot of effort into setting things up so that I could work on the plane, and then my laptop ran out of battery in the first hour.”).
Overall, I didn’t log enough over the past year, and my document is sparser than it could have been. I averaged 2 to 4 entries a month, but I think I could have managed 5 to 10. From looking over what I do have, I can feel how having more entries would have been useful. So even given the bullet points above, I think my conception of what counts as a “decision” was too strict.
Relatedly, making logging low-friction seems important. This year, I’m going to implement this in Roam, using a #[[decision]] tag, and integrate it into my existing daily / weekly review system.
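For comparison, even a plain-text journal can stay low-friction if entries carry a tag you can filter on later. This is a sketch of my own, not the Roam setup; the tag and log format are made up:

```python
# Hypothetical plain-text daily log; in practice this would be read from a file.
journal = """\
2020-01-03 Slept badly, skipped the morning review.
2020-01-04 #decision Joined y and z for dinner instead of working.
2020-01-07 #decision Dropped project X; momentum had stalled.
"""

# The yearly review then only has to scan the tagged lines.
decision_lines = [line for line in journal.splitlines() if "#decision" in line]
```

Any tagging scheme works so long as logging an entry costs seconds, not minutes, and the review can reliably find every entry.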
Even given the issues I described above, I found the assessment activity to be extremely useful. There were some places where I was able to highlight “past Eli was flat-out wrong”, and others where, having seen how things turned out, I could outline nuanced heuristics that take into account the right considerations in the right circumstances.
It also clearly affirmed two principles / Hamming problems that had occurred to me before, but hadn’t really slapped me in the face. This was helpful for realizing that “my tendency to X is preemptively destroying most of the value I might create,” which is the sort of thing that’s important to bring to full conscious attention.