Hi all,
Long time lurker, first time poster. I’ve read some of the Sequences, though I fully intend to re-read and read on.
I’m an undergrad at present, looking to participate in a trend I’ve been observing: bringing some of the rigor and predictive power of the hard sciences to linguistics.
I’m particularly interested in how language evolved, and under what physical, biological, and computational constraints; what that implies about the neural mechanisms behind human behavior; and how to use those two threads to construct a predictive, quantitative theory of linguistic behavior.
I go to a liberal arts college (I started out with more of a Lit major bent), where, after becoming disillusioned with the more philosophical side of linguistics (mid-term, no less), I took an extracurricular dive into the physical sciences just to stay sane. Then a friend recommended HPMOR, and thence I discovered LessWrong, where I’ve been happily lurking for some time.
I decided it would be useful to actually participate. So here I am.
More to the point, I think, is the question of what functions the evolutionary processes were computing. Those instincts did not evolve to provide insight into truth; they evolved to maximize reproductive fitness. The two goals certainly aren’t mutually exclusive, but to a certain extent that difference in function is why we have cognitive biases in the first place.
Obviously that’s an oversimplification, but my point is this: if we know something has gone wrong, and there’s a conflict between an intelligent person’s conclusions and the intuitions we’ve evolved, then the probability that the flaw lies in the intelligent person’s argument depends on whether that instinct in some way produced more babies than its competitors.
This may or may not significantly shift the probability distribution over expected errors assigned earlier, but I think it’s worth considering.