A lesson from the book System Effects: Complexity in Political and Social Life by Robert Jervis, and also from the book The Trading Game: A Confession by Gary Stevenson.
When people talk about planning for the future, there is often a thought chain like this:
All other things being equal, a world with thing/organisation/project X is preferable to a world without thing/organisation/project X
Therefore, I should try to make X happen
I will form a theory of change and start to work at making X happen
But of course, the moment you start working at making X happen, you have already destroyed the premise. There are no longer two equal worlds held in expectation, one with X and one without X. There is now the world without X (in the past), and the world where you are trying to make X happen (the present). And very often the path to attaining X creates a world much less preferable for you than the world before you started, long before you reach X itself.
For example:
I can see a lucrative trade opportunity: by the end of five months, the price of some commodity will settle at a new, higher point which I can forecast clearly. All other things being equal, if I take this trade I will make a lot of money.
Therefore, I should try to make this trade.
I will take out a large position, and double down if in the interim the price moves in the “wrong” direction.
However, the price can be much more volatile than you expect, especially if you are taking out big positions in a relatively illiquid market. Thus you may find that, three months in, your paper losses are so large that you reach your pain threshold and back out of the trade for fear that your original prediction was wrong. At the end of the five months, you may have predicted the price correctly, but all you did was lose a large sum of money in the interim.
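To make the interim-volatility problem concrete, here is a toy sketch in Python with entirely made-up numbers (the entry price, the price path, the position sizes, and the pain threshold are all hypothetical, not taken from Stevenson's book): the forecast turns out to be exactly right at month five, but doubling down along the way breaches the pain threshold at month three.

```python
# Toy illustration of a trade that is "right" at month 5 but abandoned at month 3.
# All numbers are hypothetical and chosen only to show the shape of the problem.

entry_price = 100.0           # price when the first position is opened
forecast_price = 115.0        # where we expect the price to settle by month 5
pain_threshold = -30_000.0    # paper loss at which we abandon the trade

# Month-end prices: the market drifts the "wrong" way before recovering.
monthly_prices = [95.0, 90.0, 85.0, 100.0, 115.0]

units = 1_000                 # initial position size
avg_cost = entry_price

for month, price in enumerate(monthly_prices, start=1):
    paper_pnl = (price - avg_cost) * units
    print(f"Month {month}: price={price:.0f}, units={units}, paper P&L={paper_pnl:,.0f}")

    if paper_pnl <= pain_threshold:
        print("  Pain threshold breached -- position closed, loss realised.")
        break

    # "Double down" whenever the price has moved against the position.
    if price < avg_cost:
        avg_cost = (avg_cost * units + price * units) / (units * 2)
        units *= 2

# What the same position would have been worth at the forecast settlement price.
held_pnl = (forecast_price - avg_cost) * units
print(f"Held to month 5 at {forecast_price:.0f}, the P&L would have been {held_pnl:,.0f}")
```

In this made-up run the prediction is correct and holding to settlement would have been very profitable, yet the trade still loses money, because the path to the end state passes through a point the trader cannot survive.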
For another example:
All other things being equal, a world with an awareness of potential race dynamics around AGI is preferable to a world without such an awareness.
Therefore, I should try to raise awareness of race dynamics.
I will write a piece about race dynamics and make my arguments very persuasive, to increase the world’s awareness of this issue.
Of course, in the process of trying to raise awareness of this issue, you might first create a world where a small subset of the population (mostly policy and AI people) is suddenly very clued in to the possibility of race dynamics. These people are also in a very good position to create, maintain, and capitalize on those dynamics (whether consciously or not), including using them to raise large amounts of cash. Now suddenly the risk of race dynamics is much larger than before, and the world is in a more precarious state.
There isn’t really a foolproof way around this problem. However, one tactic might be to take your theory of change and, instead of comparing the world state before and after the plan, examine the world state at each step along the path, consciously weighing up the changes and tradeoffs as you go. If one of those steps looks like it would break a moral, social, or pain-related threshold, maybe reconsider that theory of change.
Addendum: I think this is also why systems/ecosystems/plans which rely on establishing positive or negative feedback loops are so powerful. They are set up so that each stage incrementally moves towards the goal, so that even if there are setbacks you have room to fall back instead of breaching a pain threshold.