Sure, but the fact that the probability distribution is skewed in favor of simpler (i.e. more “beautiful”) explanations by Occam’s Razor is equivalent to saying that there should be such a bias—after all, bias is essentially just a skewing of one’s probability function. Of course this bias shouldn’t be taken to the extreme of assuming that just because one hypothesis is more beautiful than others, it automatically qualifies as the correct explanation. But discrediting such an extreme mindset doesn’t mean that a mild bias in favor of “beauty” is discredited.
A lot of excellent points in this post. I particularly like the one about having feelings about one’s own feelings, which is something I think I’ve always understood on some level, but perhaps the first time I thought about it consciously was from Seth Rogen’s line in The 40-Year-Old Virgin: “[Y]our depression is boring me for one thing, and it’s actually making me a little depressed, which is then in turn making me more depressed that you’re actually affecting my mood.”
I think posts like this prove a more general point: we humans are thinking on many levels of meta, all the time, and insistence on pulling the levels apart to examine separately isn’t “complicating things”; it’s just a way of seeing more clearly what’s already in our line of vision.
I was hoping there would be some kind of welcome thread for doing this, but I suppose an open thread is as good a place as any to introduce myself. I call myself Liskantope online; in real life I’m a postdoctoral researcher in pure math, and I blog on Wordpress and Tumblr under this handle. I identify loosely with the online rationalist movement and would like to become more involved in some ways, including participating at least a little on LessWrong. I don’t have much background in either AI or EA stuff, but I’d like to absorb more knowledge, or at least intuition, in these areas.
An accomplishment of the past month that I’m celebrating? I spoke at a conference and it went really well. I don’t often get the chance to do that, and I hope giving that talk has set off a positive feedback loop in which I get more recognition, which in turn leads to more speaking opportunities. Success in academia relies too much on self-perpetuating recognition, or the lack thereof, for my liking.
[Medium-time semi-consistent lurker, first-time commenter]
As a mathematician (but with no background in AI), I’m completely on board with this conception of “mathematical mindset” and the importance of reframing definitions in accordance with whatever will allow us to think more clearly. I’m a bit bemused by the development of this idea in the post, though, where it seems to be applied both as a defining aspect of “mathematical mindset” for approaching a discipline such as AI, and at the meta level, to redefine the different types of mindsets themselves.