Olli I’m curious how this went for you 1.5 years later.
I didn’t hold onto it after some life changes. When I try to think about why, I get:
I’ve powered my forecasting/calibration training motivation by enjoying its fun quantitative nature, rather than by getting real value out of it. That only works temporarily.
I think I started to get a tiny bit of real value after reading your post on how to do it better, but then, due to changed circumstances, the same approaches for getting value no longer worked.
The Fatebook UI/UX had some spots that increased friction. (I wrote something about this at the time, but don’t remember the specifics.)
I think I’m now in a situation where there’s much more value on the table from being better at predicting and planning. My main crux is whether the fluent, cruxy predictions approach is an effective way of turning cognitive work into better results: I undertake large amounts of cognitive work already (including a lot which is operationalisation-shaped/turning-vague-confusing-things-into-concrete-clear-things shaped), and it could be that incorporating explicit predictions into that imposes too much friction with little benefit.
(At the same time, I have this thought that surely this is a good approach, surely you want to have explicit predictions and feedback on them. I made a prediction just now about whether, if I try an approach like this for a week, I’ll feel excited about continuing with it longer-term. I didn’t put a high probability on that, given my past difficulties in holding the habit and getting value out of it, but there it is.)
This comment led me to realize there really needed to be a whole separate post focused just on “fluent cruxfinding”, since that’s a necessary step in Fluent Cruxy Predictions, and it’s the step that’s more likely to pay off immediately.
Here’s that post:
https://www.lesswrong.com/posts/wkDdQrBxoGLqPWh2P/finding-cruxes-help-reality-punch-you-in-the-face