One of the more disturbing topics in this post is the question of how much you can trust an organization of people who are willing to endure torture, rape, and death for their cause.
Surely lying isn’t as bad as any of those...
Of course, lying for your cause is almost certainly a long-term retarded thing to do… but so is censoring ideas...
As you may know from your study of marketing, accusations stick in the mind even when one is explicitly told they are false. In the parent comment and a sibling, you describe a hypothetical SIAI lying to its donors because… Roko had some conversations with Carl that led you to believe we care strongly about existential risk reduction?
If your aim is to improve SIAI, to cause there to be good organizations in this space, and/or to cause Less Wrong-ers to have accurate info, you might consider:
Talking with SIAI and/or with Fellows program alumni, so as to gather information on the issues you are concerned about. (I’d be happy to talk to you; I suspect Jasen and various alumni would too.) And then
Informing folks on LW of anything interesting/useful that you find out.
Anyone else who is concerned about any SIAI-related issue is also welcome to talk to me/us.
accusations stick in the mind even when one is explicitly told they are false
Actually that citation is about both positive and negative things—so unless you’re also asking pro-SIAI people to hush up, you’re (perhaps unknowingly) seeking to cause a pro-SIAI bias.
Another thing that citation seems to imply is that reflecting on, rather than simply diverting our attention away from, scary thoughts is essential to coming to a correct opinion on them.
One of the interesting morals from Roko’s contest is that if you care deeply about getting the most benefit per donated dollar, you have to look very closely at whom you’re giving it to.
Market forces work really well for lightbulb-sales businesses, but not so well for mom-and-pop shops, let alone charities. The motivations, preferences, and likely future actions of the people you’re giving money to become very important. Knowing if you can believe the person, in these contexts, becomes even more important.
As you note, I’ve studied marketing, sales, propaganda, cults, and charities. I know that there are some people who have no problem lying for their cause (especially if it’s for their god or to save the world).
I also know that there are some people who absolutely suck at lying. They try to lie, but the truth just seeps out of them.
That’s why I give Roko’s blurted comments more weight than whatever I’d hear from SIAI people who were chosen by you—no offence. I’ll still talk with you guys, but I don’t think a reasonably sane person can trust the sales guy beyond a point.
As far as your question goes, my primary desire is a public, consistent moderation policy for LessWrong. If you’re going to call this a community blog devoted to rationality, then please behave in sane ways. (If no one owns the blog—if it belongs to the community—then why is there dictatorial post deletion?)
I’d also like an apology from EY with regard to the chilling effects his actions have caused.
But back to what you replied to:
What would SIAI be willing to lie to donors about?
To answer your question, despite David Gerard’s advice:
I would not lie to donors about the likely impact of their donations, the evidence concerning SIAI’s ability or inability to pull off projects, how we compare to other organizations aimed at existential risk reduction, etc. (I don’t have all the answers, but I aim for accuracy and revise my beliefs and my statements as evidence comes in; I’ve actively tried to gather info on whether we or FHI reduce risk more per dollar, and I often recommend to donors that they do their own legwork with that charity comparison to improve knowledge and incentives). If a maniacal donor with a gun came searching for a Jew I had hidden in my house, or if I somehow had a “how to destroy the world” recipe and someone asked me how to use it, I suppose lying would be more tempting.
While I cannot speak for others, I suspect that Michael Vassar, Eliezer, Jasen, and others feel similarly, especially about the “not lying to one’s cooperative partners” point.
I suppose I should add “unless the actual answer is not a trolley problem” to my advice on not answering this sort of hypothetical ;-)
(my usual answer to hypotheticals is “we have no plans along those lines”, because usually we really don’t. We’re also really good at not having opinions on other organisations, e.g. Wikileaks, which we’re getting asked about A LOT because their name starts with “wiki”. A blog post on the subject is imminent. Edit: up now.)
It’s very easy to not lie when talking about the future. It is much easier to “just this once” lie about the past. You can do both, for instance, by explaining that you believe a project will succeed, even while withholding information that would convince a donor otherwise.
An example of this would be errors or misconduct in completing past projects.
Lack of relevant qualifications for people SIAI plans to employ on a project.
Or administrative errors and misconduct.
Or public relations / donor outreach misconduct.
To put the question another, less abstract way: have you ever lied to a SIAI donor? Do you know of anyone affiliated with SIAI who has lied to a donor?
Hypothetically, if I said I had evidence in the affirmative to the second question, how surprising would that be to you? How much money would you bet that such evidence doesn’t exist?
You’re trying very hard to get everyone to think that SIAI has lied to donors or done something equally dishonest. I agree that this is an appropriate question to discuss, but you are pursuing the matter so aggressively that I just have to ask: do you know something we don’t? Do you think that you/other donors have been lied to on a particular occasion, and if so, when?
An example of this would be errors or misconduct in completing past projects.
When I asked Anna about the coordination between SIAI and FHI (something like “Do you talk enough with each other that you wouldn’t both spend resources writing the same research paper?”), she told me about the one time that they had in fact both presented a paper on the same topic at a conference, and said that they now coordinate more to prevent that sort of thing.
I have found that Anna and others at SIAI are honest and forthcoming.
Another thing that citation seems to imply is that reflecting on, rather than simply diverting our attention away from, scary thoughts is essential to coming to a correct opinion on them.
Well, uh, yeah. The horse has bolted. It’s entirely unclear what choosing to keep one’s head in the sand gains anyone.
What would SIAI be willing to lie to donors about?
Although this is a reasonable question to want the answer to, it’s obvious even to me that answering at all would be silly and no sensible person who had the answer would.
Investigating the logic or lack thereof behind the (apparently ongoing) memory-holing is, however, incredibly on-topic and relevant for LW.
Although this is a reasonable question to want the answer to, it’s obvious even to me that answering at all would be silly and no sensible person who had the answer would.
Ambiguity is their ally. Both answers elicit negative responses, and they can avoid that from most people by not saying anything, so why shouldn’t they shut up?
I presume you’re not a native English speaker then—pretty much any moderately intelligent native English speaker has been forced to familiarity with 1984 at school. (When governments in the UK are being particularly authoritarian, there is often a call to send MPs copies of 1984 with a note “This is not a manual.”) Where are you from? Also, you really should read the book, then lots of the commentary on it :-) It’s one of the greatest works of science fiction and political fiction in English.
I can tell you all about equal pigs and newspeak, but ‘memory-holing’ has not seemed to make as much of a cultural footprint—probably because as a phrase it is a rather awkward fit. I wholeheartedly approve of Orwell in principle, but actually reading either of his famous books sounds too much like high-school homework. :)
Animal Farm is probably passable (though it’s so short). 1984 on the other hand is maybe my favorite book of all time. I don’t think I’ve had a stronger emotional reaction to another book. It makes Shakespeare’s tragedies look like comedies. I’d imagine you’d have similar feelings about it based on what I’ve read of your comments here.
His less-famous novels aren’t as good. On the other hand, some of his essays are among the clearest, most intelligent thinking I’ve ever come across, and would probably be of a lot of interest to LessWrong readers...
Oh yeah. Politics and the English Language is a classic on a par with the great two novels. I first read that in 1992 and wanted to print copies to distribute everywhere (we didn’t have internet then).
Yeah, that’s one of those I was thinking of. Also things like the piece about the PEN ‘anti-censorship’ event that wasn’t, and his analysis of James Burnham’s Managerialist writing...
I’m terribly curious now—did the use of any of the phrases Orwell singles out in the article actually drop significantly after the article was published? Wikipedia will not say...
Well, reading it in the 1990s and having a burnt-out ex-Communist for a housemate at the time, I fear I recognised far too many of the cliches therein as current in those circles ;-)
A lot are still current in those less rational/more angry elements of the left who still think the Labour Party represents socialism and use phrases like that to justify themselves...
Because this is LessWrong—you can give a sane response and not only does it clear the air, people understand and appreciate it.
Cable news debating isn’t needed here.
Sure we might still wonder if they’re being perfectly honest, but saying something more sane on the topic than silence seems like a net-positive from their perspective.
LessWrongers are not magically free of bias. Nor are they inherently moral people that wouldn’t stoop to using misleading rhetorical techniques, though here they are more likely to be called on it.
In any case, an answer here is available to the public internet for all to see.
I respectfully disagree, and have my hopes set on Carl (or some other level-headed person in a position to know) giving a satisfying answer.
This is LessWrong after all—we can follow complicated arguments, and at least hearing how SIAI is actually thinking about such things would (probably) reduce my paranoia.
Yeah, but this is on the Internet for everyone to see. The potential for political abuse is ridiculous and can infect even LessWrong readers. Politics is the mind-killer, but pretending it doesn’t affect almost everyone else strikes me as not smart.
The concept of ethical injunctions is known in SIAI circles I think. Enduring personal harm for your cause and doing unethical things for your cause are therefore different. Consider Eliezer’s speculation about whether a rationalist “confessor” should ever lie in this post, too. And these personal struggles with whether to ever lie about SIAI’s work.
If banning Roko’s post would reasonably cause discussion of those ideas to move away from LessWrong, then by EY’s own reasoning (the link you gave) it seems like a retarded move.
If the idea is actually dangerous, it’s way less dangerous to people who aren’t familiar with pretty esoteric LessWrongian ideas. They’re prerequisites to being vulnerable to it. So getting conversation about the idea away from LessWrong isn’t an obviously retarded idea.
Lying for good causes has a time-honored history. Protecting fugitive slaves or Holocaust victims immediately comes to mind. Just because it is more often practical to be honest than not doesn’t mean that dishonesty isn’t sometimes unambiguously the better option.
I agree that there’s a lot in history, but the examples you cited have something that doesn’t match here—historically, you lie to people you don’t plan on cooperating with later.
If you lie to an oppressive government, it’s okay because it’ll either get overthrown or you’ll never want to cooperate with it (so great is your reason for lying).
Lying to your donor pool is very, very different than lying to the Nazis about hiding Jews.
Is calling someone here Glenn Beck equivalent to Godwination?
wfg’s post strikes me as almost entirely reasonable (except the last question, which is pointless to ask) and your response as excessively defensive.
Also, you’re saying this to someone who says he’s a past donor and has not yet ruled out being a future donor. This is someone who could reasonably expect his questions to be taken seriously.
(I have some experience of involvement in a charity that suffers a relentless barrage of blitheringly stupid questions from idiots, and my volunteer role is media handling—mostly I come up with good and effective soundbites. So I appreciate and empathise with your frustration, but I think I can state with some experience behind me that your response is actually terrible.)
Okay. Given your perceptions and those of the folks who downvoted my comment, I’ll revise my opinion on the matter. I’ll also put that under “analogies not to use”; I was probably insufficiently familiar with the pop culture.
The thing I meant to say was just… Roko made a post, Nick suggested it gave bad impressions, Roko deleted it. wfg spent hours commenting again and again about how he had been asked to delete it, perhaps by someone “high up within SIAI”, and how future censorship might be imminent, how the fact that Roko had had a basically unrelated conversation suggested that we might be lying to donors (a suggestion that he didn’t make explicitly, but rather left to innuendo), etc. I feel tired of this conversation and want to go back to research and writing, but I’m kind of concerned that it’ll leave a bad taste in readers’ mouths not because of any evidence that’s actually being advanced, but because innuendo and juxtapositions, taken out of context, leave impressions of badness.
I wish I knew how to have a simple, high-content, low-politics conversation on the subject. Especially one that was self-contained and didn’t leave me feeling as though I couldn’t bow out after a while and return to other projects.
The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.
So it can happen here—because it did happen. It’s no longer in the class “things that are unthinkable”. This is itself a major credibility hit for LW.
And when a Roko post disappears—well, it was one of his posts that was disappeared before.
With this being the situation, assumptions of bad faith are going to happen. (And “stupidity” is actually the assumption of good faith.)
Your problem now is to restore trust in LW’s intellectual integrity, because SIAI broke it good and hard. Note that this is breaking an expectation, which is much worse than breaking a rule—if you break a rule you can say “we broke this rule for this reason”, but if you break expectations, people feel the ground moving under their feet, and get very upset.
There are lots of suggestions in this thread as to what people think might restore their trust in LW’s intellectual integrity; SIAI needs to go through them and work out precisely what expectations they broke and how to come clean on this.
I suspect you could at this point do with an upside to all this. Fortunately, there’s an excellent one: no-one would bother making all this fuss if they didn’t really care about LW. People here really care about LW and will do whatever they can to help you make it better.
(And the downside is that this is separate from caring about SIAI, but oh well ;-) )
(and yes, this sort of discussion around WP/WMF has been perennial since it started.)
The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.
I’ve seen several variations of this expressed about this topic, and it’s interesting to me, because this sort of view is somewhat foreign to me. I wouldn’t say I’m pro-censorship, but as an attorney trained in U.S. law, I’ve very much internalized the idea that the most serious sorts of censorship are those imposed by the government (i.e., this is what the First Amendment free speech right is about, and that makes sense because of the power of the government). Beyond that there are various levels of seriousness and danger: censorship by a big corporation is also somewhat serious because of corporate power, while censorship by the owner of a single blog (even a community one) is not very serious at all, because a blogowner is not very powerful compared to the government or a major corporation, and shutting down one outlet of communication is comparatively not a big deal on a big internet with lots of other places to express one’s views. If a siteowner exercises his or her right to delete something on a website, it’s just not the sort of harm that I weigh very heavily.
What I’m totally unsure of is where the average LW reader falls on the scale between you and me, and therefore, despite the talk about the Roko incident being such a public relations disaster and a “spectacular” deletion, I just don’t know how true that is and I’m curious what the answer would be. People who feel like me may just not feel the need to weigh in on the controversy, whereas people who are very strongly anti-censorship in this particular context do.
If a siteowner exercises his or her right to delete something on a website, it’s just not the sort of harm that I weigh very heavily.
That’s not really the crux of the issue (for me, at least, and probably not for others). As David Gerard put it, the banning of Roko’s post was a blow to people’s expectations, which was why it was so shocking. In other words, it was like discovering that LW wasn’t what everyone thought it was (and not in a good way).
Note: I personally wouldn’t classify the incident as a “disaster,” but it was still very alarming.
The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.
Like Airedale, I don’t have that impression—my impression is that 1) Censorship by website’s owner doesn’t have the moral problems associated with censorship by governments (or corporations), and 2) in online communities, dictatorship can work quite well, as long as the dictator isn’t a complete dick.
I’ve seen quite functional communities where the moderators would delete posts without warning if they were too stupid, offensive, repetitive or immoral (such as bragging about vandalizing wikipedia).
So personally, I don’t see a need for “restoring trust”. Of course, as your post attests, my experience doesn’t seem to generalize to other posters.
Y’know, one of the actual problems with LW is that I read it in my Internet as Television time, but there’s a REALLY PROMINENT SCORE COUNTER at the top left. This does not help in not treating it as a winnable video game.
(That said, could the people mass-downvoting waitingforgodel please stop? It’s tiresome. Please try to go by comment, not poster.)
I wish I knew how to have a simple, high-content, low-politics conversation on the subject.
This is about politics. The censorship of an idea related to a future dictator implementing some policy is obviously about politics.
You tell people to take friendly AI seriously. You tell people that we need friendly AI to marshal our future galactic civilisation. People take it seriously. Now the only organisation working on this is the SIAI. Therefore the SIAI is currently in direct causal control of our collective future. So why do you wonder that people care about censorship and transparency? People already care about what the U.S. is doing and demand transparency, which is ludicrous in comparison to the power of a ruling superhuman artificial intelligence that implements what the SIAI came up with as the seed for its friendliness.
If you really think that the SIAI has any importance and could possibly manage to influence or implement the safeguards for some AGI project, then everything the SIAI does is obviously very important to everyone concerned (everyone indeed).
I wish I knew how to have a simple, high-content, low-politics conversation on the subject. Especially one that was self-contained and didn’t leave me feeling as though I couldn’t bow out after a while and return to other projects.
I wish you used a classification algorithm that more naturally identified the tension between “wanting low-politics conversation” and comparing someone to Glenn Beck as a means of criticism.
Sorry. This was probably simply a terrible mistake born of unusual ignorance of pop culture and current politics. I meant to invoke “using questions as a means to plant accusations” and honestly didn’t understand that he was radically unpopular. I’ve never watched anything by him.
It’s hard to know what to trust on this thread.
What would SIAI be willing to lie to donors about?
Do you have any answers to this?
I notice that your list is future facing.
Lies are usually about the past.
Your comment here killed the hostage.
Total agreement here. In Eliezer’s words:
A fellow called George Orwell.
Ahh, thank you.
That’s some high praise there.
So I take it there isn’t a romantic ‘happily ever after’ ending? :P
Actually, there is… ;)
Both are short and enjoyable; I strongly recommend checking them out from a library or picking up a copy.
Read them. They’re actually really good books. His less-famous ones are not as brilliant, but are good too.
(We were taught 1984 in school; I promptly read to the end with eyes wide. I then borrowed Animal Farm of my own accord.)
LessWrongers are not magically free of bias. Nor are they inherently moral people that wouldn’t stoop to using misleading rhetorical techniques, though here they are more likely to be called on it.
In any case, an answer here is available to the public internet for all to see.
I respectfully disagree, and have my hopes set on Carl (or some other level-headed person in a position to know) giving a satisfying answer.
This is LessWrong after all—we can follow complicated arguments, and at least hearing how SIAI is actually thinking about such things would (probably) reduce my paranoia.
Yeah, but this is on the Internet for everyone to see. The potential for political abuse is ridiculous and can infect even LessWrong readers. Politics is the mind-killer, but pretending it doesn’t affect almost everyone else strikes me as not smart.
The concept of ethical injunctions is known in SIAI circles, I think. Enduring personal harm for your cause and doing unethical things for your cause are therefore different. Consider also Eliezer’s speculation in this post about whether a rationalist “confessor” should ever lie, and these personal struggles with whether to ever lie about SIAI’s work.
That “confessor” link is terrific.
If banning Roko’s post would reasonably cause discussion of those ideas to move away from LessWrong, then by EY’s own reasoning (the link you gave) it seems like a retarded move.
Right?
If the idea is actually dangerous, it’s way less dangerous to people who aren’t familiar with pretty esoteric LessWrongian ideas, which are prerequisites to being vulnerable to it. So getting conversation about the idea away from LessWrong isn’t an obviously retarded move.
Lying for good causes has a time-honored history; protecting fugitive slaves or Holocaust victims immediately comes to mind. Just because it is more often practical to be honest than not doesn’t mean that dishonesty isn’t sometimes unambiguously the better option.
I agree that there’s a lot in history, but the examples you cited have something that doesn’t match here—historically, you lie to people you don’t plan on cooperating with later.
If you lie to an oppressive government, it’s okay because it’ll either get overthrown or you’ll never want to cooperate with it (so great is your reason for lying).
Lying to your donor pool is very, very different from lying to the Nazis about hiding Jews.
You’re throwing around accusations of lying pretty lightly.
Am I missing something? Desrtopa responded to questions of lying to the donor pool with the equivalent of “We do it for the greater good.”
Desrtopa isn’t affiliated with SIAI. You seem to be deliberately designing confusing comments, a la Glenn Beck’s “I’m just asking questions” motif.
Is calling someone here Glenn Beck equivalent to Godwination?
wfg’s post strikes me as almost entirely reasonable (except the last question, which is pointless to ask) and your response as excessively defensive.
Also, you’re saying this to someone who says he’s a past donor and has not yet ruled out being a future donor. This is someone who could reasonably expect his questions to be taken seriously.
(I have some experience of involvement in a charity that suffers a relentless barrage of blitheringly stupid questions from idiots, and my volunteer role is media handling—mostly I come up with good and effective soundbites. So I appreciate and empathise with your frustration, but I think I can state with some experience behind me that your response is actually terrible.)
Okay. Given your perception and those of the folks who downvoted my comment, I’ll revise my opinion on the matter. I’ll also put that under “analogies not to use”; I was probably insufficiently familiar with the pop culture.
The thing I meant to say was just… Roko made a post, Nick suggested it gave bad impressions, Roko deleted it. wfg spent hours commenting again and again about how he had been asked to delete it, perhaps by someone “high up within SIAI”, how future censorship might be imminent, how the fact that Roko had had a basically unrelated conversation suggested that we might be lying to donors (a suggestion he didn’t make explicitly, but rather left to innuendo), etc. I feel tired of this conversation and want to go back to research and writing, but I’m kind of concerned that it’ll leave a bad taste in readers’ mouths not because of any evidence that’s actually being advanced, but because of innuendo and juxtapositions that, taken out of context, leave impressions of badness.
I wish I knew how to have a simple, high-content, low-politics conversation on the subject. Especially one that was self-contained and didn’t leave me feeling as though I couldn’t bow out after a while and return to other projects.
The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.
So it can happen here—because it did happen. It’s no longer in the class “things that are unthinkable”. This is itself a major credibility hit for LW.
And when a Roko post disappears—well, it was one of his posts that was disappeared before.
With this being the situation, assumptions of bad faith are going to happen. (And “stupidity” is actually the assumption of good faith.)
Your problem now is to restore trust in LW’s intellectual integrity, because SIAI broke it good and hard. Note that this is breaking an expectation, which is much worse than breaking a rule—if you break a rule you can say “we broke this rule for this reason”, but if you break expectations, people feel the ground moving under their feet, and get very upset.
There are lots of suggestions in this thread as to what people think might restore their trust in LW’s intellectual integrity, SIAI needs to go through them and work out precisely what expectations they broke and how to come clean on this.
I suspect you could at this point do with an upside to all this. Fortunately, there’s an excellent one: no-one would bother making all this fuss if they didn’t really care about LW. People here really care about LW and will do whatever they can to help you make it better.
(And the downside is that this is separate from caring about SIAI, but oh well ;-) )
(and yes, this sort of discussion around WP/WMF has been perennial since it started.)
I’ve seen several variations of this view expressed about this topic, and it’s interesting to me, because it’s somewhat foreign to my own. I wouldn’t say I’m pro-censorship, but as an attorney trained in U.S. law, I think I’ve very much internalized the idea that the most serious sorts of censorship are those carried out by the government (i.e., this is what the First Amendment free speech right is about, and that makes sense because of the government’s power). Beyond that there are various levels of seriousness and danger: big corporate censorship is somewhat serious because of corporate power, while censorship by the owner of a single blog (even a community one) is not very serious at all. A blog owner is not very powerful compared to the government or a major corporation, and shutting down one outlet of communication is comparatively not a big deal on a big internet with lots of other places to express one’s views. If a site owner exercises his or her right to delete something on a website, it’s just not the sort of harm that I weigh very heavily.
What I’m totally unsure of is where the average LW reader falls on the scale between you and me, and therefore, despite the talk about the Roko incident being such a public relations disaster and a “spectacular” deletion, I just don’t know how true that is and I’m curious what the answer would be. People who feel like me may just not feel the need to weigh in on the controversy, whereas people who are very strongly anti-censorship in this particular context do.
That’s not really the crux of the issue (for me, at least, and probably not for others). As David Gerard put it, the banning of Roko’s post was a blow to people’s expectations, which was why it was so shocking. In other words, it was like discovering that LW wasn’t what everyone thought it was (and not in a good way).
Note: I personally wouldn’t classify the incident as a “disaster,” but it was still very alarming.
Like Airedale, I don’t have that impression—my impression is that 1) Censorship by website’s owner doesn’t have the moral problems associated with censorship by governments (or corporations), and 2) in online communities, dictatorship can work quite well, as long as the dictator isn’t a complete dick.
I’ve seen quite functional communities where the moderators would delete posts without warning if they were too stupid, offensive, repetitive or immoral (such as bragging about vandalizing wikipedia).
So personally, I don’t see a need for “restoring trust”. Of course, as your post attests, my experience doesn’t seem to generalize to other posters.
Great post. It confuses me why this isn’t at 10+ karma.
+5 is fine!
Y’know, one of the actual problems with LW is that I read it in my Internet as Television time, but there’s a REALLY PROMINENT SCORE COUNTER at the top left. This does not help in not treating it as a winnable video game.
(That said, could the people mass-downvoting waitingforgodel please stop? It’s tiresome. Please try to go by comment, not poster.)
So true!
(Except it’s at the top right. At least, the one I’m thinking of.)
The other left.
(Yes, I actually just confused left and right. STOP POSTING.)
Probably because it’s buried in the middle of an enormous discussion that very few people have read or will read.
Lol. Right, that’d do it.
This is about politics. The censorship of an idea related to a future dictator implementing some policy is obviously about politics.
You tell people to take friendly AI seriously. You tell people that we need friendly AI to marshal our future galactic civilisation. People take it seriously. Now the only organisation working on this is the SIAI; therefore the SIAI is currently in direct causal control of our collective future. So why do you wonder that people care about censorship and transparency? People already care about what the U.S. is doing and demand transparency, which is trifling in comparison to the power of a ruling superhuman artificial intelligence that implements what the SIAI came up with as the seed of its friendliness.
If you really think that the SIAI has any importance and could possibly manage to influence or implement the safeguards for some AGI project, then everything the SIAI does is obviously very important to everyone concerned (everyone, indeed).
What? No way! The organisation seems very unlikely to produce machine intelligence to me—due to all the other vastly-better funded players.
I wish you used a classification algorithm that more naturally identified the tension between “wanting low-politics conversation” and comparing someone to Glenn Beck as a means of criticism.
Sorry. This was probably simply a terrible mistake born of unusual ignorance of pop culture and current politics. I meant to invoke “using questions as a means to plant accusations” and honestly didn’t understand that he was radically unpopular. I’ve never watched anything by him.
Well, it’s not that Beck is unpopular; it’s that he’s very popular with people of a particular political ideology.
In fairness, though, he is sort of the canonical example for “I’m just asking questions, here!”. (And I wasn’t one of those voting you down on this.)
I think referring to the phenomenon itself is enough to make one’s point on the issue, and it’s not necessary to identify a person who does it a lot.
-3 after less than 15 minutes suggests so!
Make that “they do it for the greater good.”
Sorry about mistakenly implying s/he was affiliated. I’ll be more diligent with my Google stalking in the future.
edit: In my defense, SIAI affiliation has been very common when looking up very “pro” people from this thread.
Thanks. I appreciate that.