I also got a Fatebook account thanks to this post.
This post lays out a bunch of tools that address what I’ve previously found lacking in personal forecasts, so thanks! I’ve definitely gone observables-first, forecasted primarily the external world (rather than e.g. “if I do X, will I afterwards think it was a good thing to do?”), and have had the vaguely-neutral-about-everything issue you touched on in Frame 3.
I’ll now try these techniques out and see whether that helps.
...and as I wrote that sentence, I came to think about how Humans are not automatically strategic—particularly that we do not “ask ourselves what we’re trying to achieve” and “ask ourselves how we could tell if we achieved it”—and that this is precisely the type of thing you were using Fatebook for in this post. So, I actually sat down, thought about it and made a few forecasts:
⚖ Two months from now, will I think I’m clearly better at operationalizing cruxy predictions about my future mental state? (Olli Järviniemi: 80%)
⚖ Two months from now, will I think my “inner simulator” makes majorly less in-hindsight-blatantly-obvious mistakes? (Olli Järviniemi: 60%)
⚖ Two months from now, will I be regularly predicting things relevant to my long-term goals and think this provides value? (Olli Järviniemi: 25%)
And noticing that making these forecasts was cognitively heavy and not fluent at all, I made one more forecast:
⚖ Two months from now, will I be able to fluently use forecasting as a part of my workflow? (Olli Järviniemi: 20%)
So far I’ve made a couple of forecasts of the form “if I go to event X, will I think it was clearly worth it” that already resolved, and felt like I got useful data points to calibrate my expectations on.
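For concreteness, one simple way to turn resolved yes/no forecasts like these into a calibration number is the Brier score. A minimal sketch, with made-up probabilities and outcomes rather than my actual forecasts:

```python
# Score resolved yes/no forecasts with the Brier score.
# Each pair is (stated probability, outcome: 1 if it resolved yes, else 0).
# These values are illustrative, not real Fatebook data.
resolved = [
    (0.80, 1),  # "if I go to event X, will I think it was clearly worth it?"
    (0.60, 0),
    (0.25, 0),
]

# Brier score: mean squared error between probability and outcome.
# 0.0 is perfect; always answering 50% scores 0.25.
brier = sum((p - outcome) ** 2 for p, outcome in resolved) / len(resolved)
print(f"Brier score: {brier:.3f}")  # lower is better
```

Anything consistently below the 0.25 you’d get from always saying 50% suggests the probabilities are carrying real information.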
Olli, I’m curious how this went for you 1.5 years later.
I didn’t keep it up after some life changes. When I try to think about why, I get:
My motivation for forecasting/calibration training came from enjoying its fun, quantitative nature, rather than from getting real value out of it. That only works temporarily.
I think I started to get a tiny bit of real value after reading your post on how to do it better, but then, due to changed circumstances, the same approaches for getting value no longer worked.
The Fatebook UI/UX had some spots that increased friction. (I wrote something about this at the time, but don’t remember the specifics.)
I think I’m now in a situation where there’s much more value on the table from being better at predicting and planning. My main crux is whether the fluent, cruxy predictions approach is an effective way of turning cognitive work into better results: I undertake large amounts of cognitive work already (including a lot which is operationalisation-shaped, i.e. turning-vague-confusing-things-into-concrete-clear-things-shaped), and it could be that incorporating explicit predictions into that imposes too much friction for little benefit.
(At the same time, I have this thought that surely this is a good approach, surely you want explicit predictions and feedback on them. So I made a prediction just now about whether, if I try an approach like this for a week, I’ll feel excited about continuing with it longer-term. I didn’t put a high probability on it, given my past difficulties with holding the habit and getting value out of it, but there it is.)
This comment led me to realize there really needed to be a whole separate post focused just on “fluent cruxfinding”, since that’s a necessary step in Fluent Cruxy Predictions, and it’s the step that’s more likely to pay off immediately.
Here’s that post:
https://www.lesswrong.com/posts/wkDdQrBxoGLqPWh2P/finding-cruxes-help-reality-punch-you-in-the-face
Woo, great. :)
Whether or not this works out for you, I quite appreciate you laying out the details. Hope it’s useful!