You say that there is a lot of outright hostility toward anything aimed at reducing x-risks and human misery, unless it comes from MIRI.
I was actually making a specific allusion to the hostility towards practical, near-term artificial general intelligence work. I have at times in the past advocated working on AGI technology now, not later, and received robotic responses that my proposals are reckless and dangerous, along with helpful directions to go read the Sequences. I once joined #lesswrong on IRC and introduced myself as someone interested in making near-term progress on AGI, and received two separate death threats (no joke). Maybe that's just IRC, but I left and haven't gone back.
I actually would love to see you write articles on all your theses here, on LW. LW-critical articles have been promoted several times already, including Yvain's article, so it's not as if LW is intolerant of criticism.
Things have changed, believe me.
I don’t know exactly what process generates the featured articles, but I don’t think it has much to do with the community’s current preoccupations.
My point was that it has become a lot more tolerant.
Maybe, but the core beliefs and cultural biases haven't changed in the years that I've been here.
But you didn’t get karmassinated or called an idiot.
This is true. I did not expect the overwhelmingly positive response I got...
See http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/