Getting oriented fast in complex/messy real world situations in fields in which you are not an expert
For example, right now one topic to get oriented in would be COVID; I think for a good thinker it should be achievable to have a big-picture understanding of the situation comparable to a median epidemiologist after a few days of research
Where the point isn’t to get an accurate forecast of some global variable asked about on Metaculus, but a gears-level model of what’s going on / what the current ‘critical points’ are which will have outsized impact / …
In my impression, compared to some of the ‘LessWrong-style rationality’, this depends more heavily on ‘doing bounded rationality well’ - that is, finding the most important bits / efficiently ignoring almost all information, in contrast to carefully weighing several hypotheses which you already have
Actually trying to change something in the world where the system you are interacting with has a significant level of complexity & a somewhat fast feedback loop (& it’s not super-high-stakes)
A few examples of seemingly stupid things of this type I did:
filed a lawsuit without the aid of a lawyer (in a low-stakes case)
repaired various devices with a value much lower than the value of my time
tinkered with code in a language I don’t know
tried to moderate a Wikipedia article on a highly controversial topic over which two groups of editors are fighting
One thing I’m a bit worried about in some versions of LW rationality (& someone should write a post about it) is something like … ‘missing opportunities to actually fight in non-super-high-stakes matters’, in the martial arts metaphor.