The parts about local rational behaviour look and feel fine and can be considered "LW rationality" in the meta sense; the comparisons to cogsci seem to support those parts. I'm not sure cogsci says anything about large-scale aggregate consequentialism, though (to beat the undead horse: I doubt any science can do much with dust specks vs. torture...). And some applications of rationality (like trying to describe FOOM) seem too prior-dependent. So no, it's not the methodology of rational decision-making that is the problem.