The separation I’m hoping to make is between banning him because “we know he committed sex crimes” and banning him because “he’s promoting reasoning styles in a way we think is predatory.” We do not know the first to the standard of legal evidence; ialdabaoth has not been convicted in a court of law, and while I think the community investigations were adequate for the question of whether he should be allowed at particular events or clubs, my sense is that his exile was decided by amorphous community-wide processing in a way that I’m reluctant to extend further.
I’m making the additional claim that “The Bay Area community has a rough consensus to kick this guy out” by itself does not meet my bar for banning someone from LessWrong, given the different dynamics of in-person and online interactions. (As a trivial example, suppose someone’s body has a persistent and horrible smell; it could easily be the utilitarian move to not allow them at any physical meetups while giving them full freedom to participate online.) I think this is the bit Tenoke is finding hardest to swallow; it’s one thing to say “yep, this guy is exiled and we’re following the herd” and another thing to say “we’ve exercised independent judgment here, despite obvious pressures to conform.” The latter is a more surprising claim, and correspondingly would require more evidence.
> and (2) the reason for taking it seriously is “the personal stuff”.
I think this is indirectly true. That is, there’s a separation between expected harm and actual harm, and I’m trying to implement procedures that reduce expected harm. Consider the difference between punishing people for driving drunk and only punishing people for crashing. It’s one thing to wait until someone accumulates a cloud of ‘unfortunate events’ around them that leads to them finally losing their last defenders, and another to take active steps to assess risks and reduce them. Note that this requires a good model of how ‘drunkenness’ leads to ‘crashes’, and I do not see us as having presented a convincing model of that in this case.
Of course, this post isn’t an example of that; as mentioned, this post is years late, and the real test of whether we can do the equivalent of punishing people for driving drunk is whether we can do anything about people currently causing problems [in expectation]. But my hope is that this community slowly moves from a world where ‘concerns about X’ are published years after they’ve become mutual knowledge among people in the know to one where corrosive forces are actively cleaned up before they make things substantially worse.