Econ grad student here (and someone else who converted away from Austrian econ, in part because of Caplan’s article and debate with Block). Most of economics just chugs right along with the standard rationality assumptions (instrumental rationality, not epistemic). Not because economists actually believe humans are rational—well, some do, but I digress—but largely because we can actually get answers to real-world problems out of the rationality assumptions, and sometimes (though not always) these answers correspond to reality. In short, rationality is a model and economists treat it as such—it’s false, but it’s an often useful approximation of reality. The same goes for always assuming we’re in equilibrium. The trick is finding when and where the approximation isn’t good enough, and what your criterion for “good enough” is.
Now, this doesn’t mean mainstream economists aren’t interested in cogsci rationality. An entire subfield of economics—behavioral economics—rose up in tandem with the rise of the cogsci approach to studying human decision-making. In fact, Kahneman won the Nobel Prize in economics. AFAICT there’s a large market for economic research that applies behavioral economics to problems typically studied in classical, rational-agent settings. The problem isn’t the demand side—I think economists would love to see a fully general theory of general equilibrium with more plausible agents—it’s the supply side: getting answers out of models with non-rational agents is a difficult task. It’s already hard enough with rational agents for models to be anywhere near realistic—in macro models with micro foundations, we often assume all agents are identical and all firms are identical. This may seem terribly unrealistic, but often there’s some other complication in the model that already makes it hard enough to find solutions. Adding heterogeneous firms and agents is an extra complication that may not add anything illuminating to the model. So, many economists treat the rationality assumptions that are fundamental to neoclassical economics similarly. If the rationality of agents within their model is tangential to the point they’re trying to make (which may only be known empirically), they’ll choose the easier assumption to work with. There are fields where the frailty of human rationality seems centrally important, and those are the fields where you’re most likely to see nonstandard rationality assumptions. Behavioral finance is one example.
The biggest thing I would say is, don’t think in terms of “schools” of economic thought. Think in terms of models and tools. Most good ideas are eventually assimilated into the “neoclassical” economic toolkit in some form or another. And besides, thinking in terms of schools of thought is a good way to unintentionally mind-kill yourself.
As far as textbooks go, most higher-level texts (intermediate micro and above) will present models without making any claims about when they’re a good approximation and when they aren’t. Oftentimes this is because the models being presented are actually just stepping stones to the more realistic and more complicated models economists are actually using. This is generally good, though I wish there were more empirical evidence presented. Any edition of Microeconomic Analysis by Varian will give you a good intermediate-level (requires some calculus) rundown of standard micro theory. Think of it as taking standard economic intuitions (standard to economists—even Austrians) and writing down equations that describe them so that we can talk about them precisely. I’d steer clear of any non-graduate-level macro textbooks. The macro we teach undergrads is not the macro practicing macroeconomists actually believe. (Even at the graduate level, there isn’t a generally accepted class of models that economists agree on, so it might not be that useful to study modern macro.) If your mathematical background is stronger, Mas-Colell, Whinston and Green’s Microeconomic Theory is a standard first-year graduate micro text that’s densely packed with a lot of material. If you’re unsure about your math background, Simon and Blume’s Mathematics for Economists is the standard math primer used to prepare students for the course Mas-Colell is typically used in.
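To give a flavor of what “writing down equations that describe economic intuitions” looks like, here’s a minimal sketch of the textbook consumer problem. The Cobb-Douglas utility function is just one illustrative choice of functional form, not the only one these books use:

```latex
% Consumer chooses a bundle (x_1, x_2) to maximize utility
% subject to a budget constraint (prices p_1, p_2; income m):
\max_{x_1, x_2} \; x_1^{a} x_2^{1-a}
\quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 \le m

% Solving the first-order conditions yields the demand functions:
x_1^* = \frac{a m}{p_1}, \qquad x_2^* = \frac{(1-a) m}{p_2}
```

The point is that a familiar intuition—demand for a good falls as its price rises—drops out as a precise, checkable statement ($\partial x_1^*/\partial p_1 < 0$) rather than a verbal claim.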
Edit: Holy mother of grammar!
My model of this situation is less sanguine than others here, though Yvain and Tetronian hinted at it: it’s identity politics. Humans very naturally associate themselves with many different groups, some of them arbitrarily defined, and often without any conscious thought. Religion, favorite sports teams, the street/neighborhood/city/state/country you live in, and many other things can be the focal point of these groups. The more you associate with one of these groups, the more it’s part of your identity—i.e., how you see yourself. If you associate with one of these groups particularly strongly, any action which appears to make a rival group look better will personally offend you and elicit a response.
I’m from the St. Louis area in Missouri (US), and our baseball team, the St. Louis Cardinals, has a longstanding rivalry with the Chicago Cubs, a nearby team. In the past (when the Cubs were fairly good and actually a threat), I’ve seen Cardinals and Cubs fans get into fights for no other reason than one of them insulted the other’s favorite team. I’ve heard similar stories about fans of St. Louis and Chicago’s hockey teams (another rivalry). I had a philosophy professor in undergrad who would get visibly upset at times in class when arguing against reductionism (he’s Christian), and I think we’ve all seen both religious and political debates get heated.
My model of all these situations is the same as my model of your situation. Before joining LessWrong, you spent a certain number of identity points on “being rational,” but probably didn’t have much of a group to identify with, so when someone religious or superstitious got in a jab against their hated rivals, the rationalists, you didn’t feel anything or think much of it. Now that you’ve been a member of LW for some time and absorbed its memes, you’re spending many more identity points on “being rational,” primarily because, I conjecture, you can now point to a large, dedicated group of like-minded people. As such, you’re much more likely to react with offense when someone brings up religion or homeopathy in a positive light, since that’s implicitly an attack on your group.
Identity actually terrifies me because of how it seems able to control my actions and even my beliefs. I remember writing a political philosophy paper in undergrad and actually thinking “but if I use this argument, then I can’t argue for Anarcho-Capitalism anymore.” If that wasn’t a red flag, I don’t know what is—though naturally I didn’t notice it as one at the time. One way to deal with this is to keep your identity small so that you minimize how often you’re swayed in one direction or another for reasons purely of identity politics. Also, crafting a particular identity for yourself can work. I try to think of myself as curious and tolerant of beliefs that I know to be crazy.
My own experience has been similar to badger’s—I’ve grown more tolerant of crazy beliefs (and beliefs that simply contradict my own) since discovering OB/LW. I can’t really be sure why, but I’d like to think it’s because I’ve implemented the two strategies above. Learning that politics is the mind-killer, and realizing that this applies more broadly than to groups based on political affiliation, actually scared me to some extent. My immediate reaction was to reject all group affiliations (that I could, anyway), but since then I’ve let some of the more innocuous ones back in, because I’d rather consciously spend my identity points than let my brain subconsciously do it.