There is a lot of variance in decision-making quality that is not well-accounted for by how much information actors have about the problem domain, and how smart they are.
I currently believe that the factor that explains most of this remaining variance is “paranoia”. In particular, the kind of paranoia that becomes more adaptive as your environment fills with more competent adversaries. While I will undoubtedly not succeed at fully conveying why I believe this, I hope to at least introduce some of the concepts I use to think about it.
I don’t know if this was intended, but up until the end I read this post as saying, in this paragraph, that the variance is explained by people not being paranoid enough, or not paranoid in the right way, and that the post would therefore explain how to be paranoid properly.
I do also think that people suck at being paranoid in the right way, but it’s a tricky problem.
I am hoping to write more about how to be paranoid in the right way, or how to avoid paranoia-inducing environments (my post yesterday was pointing at one such thing), but that is something I am somewhat less confident in than the basic dynamics.