I’ll also say that censorship is a “hot button” issue for me, to the point that I’m not sure I want to continue helping SIAI. They went from nerdy-but-fun-to-talk-to/help to scary-cult-like-weirdos as soon as I read the article and thought about what EY’s reaction, and Roko’s removal, meant.
I’m seriously considering brainstorming a list of easy ways to increase existential risks by 0.0001%, and then performing one at random every time I hear such a reduction cited as the reason for silliness like this.
(Deleting this post, or the one I’m replying to, would count.)
Can I just state categorically that this “ways to increase existential risks” thing is stupid and completely over the top?
We should be able to discuss things, sometimes even continuing discussion in private. We should not stoop to playing silly games of brinksmanship.
Geez, if we can’t even avoid existential brinksmanship on the goddamn LW forum, when the technology is hypothetical and the stakes are low, what hope in hell do we have when real politicians get wind of real progress in AGI?
No one asked or forced Roko to leave Less Wrong. Wounded by Eliezer’s public reprimand, Roko deleted all his comments and said that he was leaving.
(I for one wish he would come back. He was a valuable contributor.)
Correct. I was not asked to leave.
How much experience do you have with various online communities?
I’ve found that those with somewhat strict moderation by sane people have better discussion than those with little or no moderation.
I think “freedom of speech” has different connotations, and different consequences, in online communities compared to the real world. Anonymity makes a big difference, as does the possibility of leaving and joining another community, or the fact that “real-life consequences” are much smaller.
I’m not sure I understand what you mean here—are you saying that you are willing to try to increase existential risk by 0.0001% if someone deletes your post?
If so, you’re a fucking despicable dick. But I may have misunderstood you.
That particular incident was not one in which Eliezer came across as sane (or stable). I don’t believe moderation itself is the subject of wfg’s criticism.
That … may be true. I’m not very interested in putting Eliezer on trial; it’s the kind of petty politics I try to avoid. He seems to be doing a pretty good job of teaching interesting and useful stuff, setting up a functional community, and defending unusual ideas while not looking like a total loon. I don’t think he needs “help” from any back-seat drivers.
The impression I got of the whole Roko fiasco was that Eliezer was more concerned with avoiding nightmares in people he cared about than with the repercussions of Roko’s post on existential risk. But I didn’t dig into it very much—as I said, I’m not very interested in he-said/she-said bickering. So I may be wrong in my impressions.
Hey Emile,
Please check out my other comments on this thread before replying, as it sounds like my reasoning isn’t fully clear to you.
Re: policing an online community
I agree that there are a lot of options to consider about how LW should be run, and that if people don’t like EY deleting their posts they’re free to try to set up their own LW in parallel. I don’t think it would be a good thing, or something we should encourage, but I agree it’s an option.
I also agree that some policing can help prevent a negative community from developing—that’s one reason I was glad to see that LW went with the reddit platform. It’s great at policing. I think it’s a big part of what makes LW so successful.
That said, I also think that users should try other options rather than simply giving up on LW if they don’t like what’s going on. That’s what I’m doing here.
Re: 0.0001%
You didn’t misunderstand me about the whole post deletion thing. To my mind 0.0001% isn’t that much compared to what the post deletion means about the future of LW. All this cloak-and-dagger silliness hurts the community. I’m doing my part to avoid further damage.
No one is going to delete it (I think? :p), so it doesn’t really matter either way.
-wfg
You’re threatening to kill, on average, at least 6000 people in order to get the moderation policy you prefer. You’re also not completely insensitive to how people appear to others. Would you like to reconsider how you’ve been going about achieving your aims?
I find it hard to relate to the way of thinking of someone who’s willing to increase the chances that humanity goes extinct if someone deletes his post from a forum on the internet.
Please go find another community to “help” with this kind of blackmail.
If I understand him correctly, what he’s trying to do is to precommit to doing something which increases ER, iff EY does something that he (wfg) believes will increase ER by a greater amount. Now he may or may not be correct in that belief, but it seems clear that his motivation is to decrease net ER by disincentivizing something he views as increasing ER.
Right. Thanks for this post. People keep responding with knee-jerk reactions to the implementation rather than thought-out ones to the idea :-/
Not that I can blame them; this seems to be an emotional topic for all of us.
Fair enough, go check out this article (and the wikipedia article on MAD) and see if it doesn’t make a bit more sense.
I don’t understand why you’re so upset about LW posts being deleted, to the extent of being willing to increase existential risks just to prevent that from happening.
The US government censors child pornography, details of nuclear weapon designs, etc., with penalty of imprisonment instead of just having a post deleted. If you care so much about censorship, why do you not focus your efforts on it instead? (Not to mention other countries like China and North Korea.)
One reason would be if you believe that the act of suppressing a significant point of discussion of possible actions of an FAI matters rather a lot. “Don’t talk about the possibility of my ‘friendly’ AI torturing people” isn’t something that ought to engender confidence in a friendliness researcher.
Eliezer might delete it anyway, although I don’t expect it. You made a threat, not an offer. If the fiasco with Roko didn’t convince you that he takes decision theory seriously, what will?
Threats and offers look identical to me after thinking about this some more—try swapping the two words in a couple of sentences.
They’re both simply telling someone that you’ll do something based on what they do.
Am I missing something?
(Please don’t vote unless you’ve read the whole thread found here)
I did not choose to downvote the parent based on this but I was tempted. I may have upvoted without the prescription.
Fair enough; we need to figure out a better way to navigate to the relevant part of “open thread” posts. The “load comments above” link doesn’t load comments below what’s above :-/
Usability, speaking the truth, and avoiding redundant comments are much more important to me than votes. If I could type it again I’d go with: please don’t reply unless you’ve read the whole thread.
I think the fact that he takes decision theory seriously is why he won’t delete it.
I don’t expect him to delete it. However, I don’t expect the threat made in the comment to be among the reasons he does not delete it.
Ahh, okay good. LW & EY are awesome—as I mentioned in the rest of this thread, I don’t want to change any more than the smallest bit necessary to avoid future censorship.
-wfg