You say that there is a lot of outright hostility toward anything working against x-risks and human misery, unless it’s MIRI.
I was actually making a specific allusion to the hostility toward practical, near-term artificial general intelligence work. I have at times in the past advocated for working on AGI technology now, not later, and been given robotic responses that I’m offering reckless and dangerous proposals, then helpfully directed to go read the Sequences. I once joined #lesswrong on IRC, introduced myself as someone interested in making near-term progress on AGI, and received two separate death threats (no joke). Maybe that’s just IRC—but I left and haven’t gone back.