A year later: If you’re going to make predictions, it’s IMO obviously better to base them around “what would change your decisions?” (Otherwise, this is more like a random hobby than a useful rationality skill.)
And it’s still true that it’s way better to be fluent than not-fluent, for the reasons I laid out in this post (i.e. you can quickly interweave prediction into your existing plan-making process, instead of clunkily trying to set aside time for it).
The question is “is it worth the effort of getting fluent?”
When I first started writing this review, I found myself shrugging sadly and thinking “well, I didn’t really turn this habit into a clearly useful skill.” But then I immediately found myself booting up the skills here for a current project, where I’d been sort of myopically following the trail of obvious next actions without asking “will anyone use this project a year from now?”
And then the skills from this post came tumbling out: “Well, no, by default, they won’t use this project.” “In the worlds where they did use the project, what sort of things would have happened in the meantime?” And then I generated some more specific predictions.
This is maybe all just saying that Murphyjitsu is useful, and then the question is “does layering it through a Fatebook-oriented process add value?”
Reducing friction for “context”
One bottleneck I’ve found with Fatebook is that if you want to write predictions quickly, you don’t want to have to write down a whole lot of context. But then, for longer-term predictions, it’s hard to figure out “was this true in the way that I cared about at the time?” because I don’t remember exactly why I cared about it.
For any of this to be useful, we need to either make it much easier to learn the skills to fluency, or avoid having to learn a new skill at all. An idea I just had to solve that: have LLMs generate the surrounding context, based on whatever you were just working on when you made the prediction.
A year from now, in the prior 2 weeks, will I have found it useful to have LLM-generated context for a prediction, for evaluating it? (20%)
(and then I just had an LLM generate the context based on this comment, although, lol, I guess it’s not actually better than just including the whole comment verbatim in this case)
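For concreteness, here’s a minimal sketch of what the LLM-generated-context step might look like, assuming an OpenAI-style chat API. The model name, the prompt, and the `generate_context` helper are all illustrative assumptions on my part, not a tested workflow:

```python
# Minimal sketch of "have an LLM generate the surrounding context,"
# assuming an OpenAI-style chat API. Model name, prompt, and helper
# are illustrative assumptions, not a tested setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_context(prediction: str, working_notes: str) -> str:
    """Write a short blurb explaining why this prediction mattered,
    so future-you can resolve it without reconstructing the situation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Given a forecast and the notes the author was working "
                    "on when they made it, write 2-3 sentences of context "
                    "explaining what the author cared about, so the forecast "
                    "can be resolved a year later."
                ),
            },
            {
                "role": "user",
                "content": f"Prediction: {prediction}\n\nNotes:\n{working_notes}",
            },
        ],
    )
    return response.choices[0].message.content

# Example: the output could then be pasted into the Fatebook question's notes.
context = generate_context(
    "A year from now, will anyone still be using this project? (20%)",
    "...whatever document or plan you were editing when the prediction occurred to you...",
)
print(context)
```

The actual friction-reducing piece would be automatically grabbing whatever document you were just editing as the working notes, rather than pasting it in by hand.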