This post posed the question very clearly, and laid out a bunch of interesting possible hypotheses to explain the data. I think it’s an important question for humanity and also comes up regularly in my thinking about how to help people do research on questions like AI alignment and human rationality.