That depends a lot on what “Less Wrong rationality” is understood to denote.
There’s a lot of stuff here I recognized as mainstream cogsci when I read it. There’s other stuff that I don’t consider mainstream cogsci (e.g. cryonics advocacy, MWI advocacy, confident predictions of FOOMing AI).
There’s other stuff that drifts in between (e.g., the meta-ethics stuff is embedded in a fairly conventional framework, but comes to conclusions that are not clearly conventional… though at times this seems more a fact about presentation than content).
I can accept the idea that some of that stuff is central to “LW rationality” and some of it isn’t, but it’s not at all obvious where one would draw the line.
The material about local rational behaviour looks and feels fine, and can be considered “LW rationality” in the meta sense; the comparison to cogsci seems to cover these parts.
I am not sure cogsci ever says anything about large-scale aggregate consequentialism (to beat the undead horse: I doubt any science can do much with dust specks and torture…). And some applications of rationality (like trying to describe FOOM) seem too prior-dependent.
So no, it’s not the methodology of rational decision making that is a problem.