Consider that I have carefully thought about this for a long time, and that I’m not going to completely override my reasoning and ape the heuristic “if internet stranger thinks I’m in a cult then I’m in a cult.”
That was not the heuristic I was referring to. It was more like: “What is a reference class of cults, and does this particular movement pattern-match it? What meta-level (not object-level) considerations distinguish it from the rest of the reference class?” I assume that you “have carefully thought about this for a long time” and have reasonably good answers, whatever they are.
Humans continuously pick their own training data and are generally not very aware of the implicit bias this causes, or of the attractor dynamics that follow from it. This could be the one bias that really matters, and ironically it is not especially recognized in the one community supposedly most concerned with cognitive biases.
Debates about “who’s in what reference class” tend to consume arbitrary amounts of time while going nowhere. A more helpful framing of your question might be: “Given that you’re participating in a community that culturally reinforces this idea, are you sure you’ve fully accounted for confirmation bias and groupthink in your views on AI risk?” To me, LessWrong does not look like a cult, but that does not imply it’s immune to epistemological problems like groupthink.