It was a group of rather committed, individually competent rationalists, but they quickly came to the conclusion that while they could put in the effort to become much better at forecasting, the actual skills they’d learn would be highly specific to the task of winning points in prediction tasks. They abandoned the project, concluding that it would not meaningfully improve their general capability to accomplish things!
What you (can) learn from something might not be obvious in advance. While it’s possible they were right, it’s also possible they were wrong.
And if you’re right, then doing the thing is a waste, but if you’re wrong then it’s not.*
*Technically the benefit of something can equal the cost.
U(x) = Benefit − Cost. The first term is probabilistic: in the mind, if not in the world. (The second may be as well, but to a lesser extent.)
If this is instead modeled using a binary variable ‘really good (RG)’, the expected utility of x is roughly:
Outcome_RG * p_RG + Outcome_not * (1 − p_RG) − Cost
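Plugging in made-up numbers makes the binary model concrete. This is a minimal sketch; the payoffs, probability, and cost below are illustrative assumptions, not anything from the story:

```python
def expected_utility(outcome_rg, outcome_not, p_rg, cost):
    """Binary 'really good' model: Outcome_RG*p_RG + Outcome_not*(1 - p_RG) - cost."""
    return outcome_rg * p_rg + outcome_not * (1 - p_rg) - cost

# Illustrative values: a 20% chance the practice pays off big (100),
# a small payoff (5) otherwise, and a cost of 30 (all in arbitrary units).
print(expected_utility(outcome_rg=100, outcome_not=5, p_rg=0.2, cost=30))
```

With these numbers the expected utility comes out slightly negative, so the binary model says skip it; a different guess for p_RG flips the sign.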
But this supposes that the action is either done or not done, ignoring continuity: you, to superforecaster-you, is a continuum. If this is broken up into intervals of hours, then there may exist hours x and y such that U(x) − cost > 0 but U(y) − cost < 0. The continuous generalization is the derivative of ‘U(x hours) − cost’, which is zero where the utility has stopped increasing and started decreasing (or where the reverse holds). This leaves the question of how U(x) is calculated, or estimated. One might imagine that this group could have been right: perhaps the low-hanging fruit of forecasting/planning is Fermi estimates, and they already had that skill/tool.
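The break-even logic above can be sketched numerically. The functional forms here are assumptions for illustration (a diminishing-returns benefit curve and a constant cost per hour), chosen so that the marginal hour eventually stops paying for itself:

```python
import math

# Assumed benefit curve with diminishing returns (made-up numbers):
# the first hours of practice teach a lot, later hours teach less.
def benefit(hours):
    return 50 * (1 - math.exp(-hours / 20))

COST_PER_HOUR = 1.0

def net(hours):
    return benefit(hours) - COST_PER_HOUR * hours

# Discrete analogue of "the derivative of U(x hours) - cost is zero":
# stop at the first hour whose marginal benefit falls below its marginal cost.
def break_even_hour(max_hours=200):
    for h in range(max_hours):
        if benefit(h + 1) - benefit(h) < COST_PER_HOUR:
            return h
    return max_hours

print(break_even_hour())
```

Note that at the break-even hour the total net utility can still be positive; the derivative condition marks where adding more hours stops helping, not whether the whole project was worth doing.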
Forecasting (predicting the future) is all well and good if you can’t affect something, but if you can, then perhaps planning (creating the desired future) is better. The first counterexample that comes to mind: if you could predict the stock market in advance, you might be able to make money off that prediction without affecting anything. The example seems unlikely, but it suggests a relationship between the two: some information about the future is useful for ‘making plans’. However, while some of the information that will (or could) be important in the future may be obvious in advance, that leaves:
how to forecast information about the future that’s obviously useful (if the forecast is correct)
the information that’s not obviously useful, but turns out to be important later. (This is usually lumped under ‘unknown unknowns’, but while Moravec’s paradox** can be cast as an unknown unknown, the fact that no one had yet built a machine/robot that did x could be considered known.)
**Moving is harder than calculating.
Since one can’t do most of the things in the world for oneself, evaluating expert judgement has to be one of the upstream skills chosen for investment/cultivation.
TL;DR:
And while prediction may be a skill in its own right, even a project that ‘fails’ can still build skills/knowledge. On that note:
What skills/tools/etc. will (obviously) be useful in the future? and
What should be done about skills/tools/etc. that aren’t obviously useful now, but will be in hindsight?