So I’m going to say this here rather than anywhere else, but I think Eliezer’s approach to this has been completely wrong-headed. His response has always come tinged with a hint of outrage and upset. He may even be right to be that upset and angry about the internet’s reaction to this, but I don’t think it looks good! From a PR perspective, I would personally stick with an amused tone. Something like:
“Hi, Eliezer here. Yeah, that whole thing was kind of a mess! I over-reacted, everyone else over-reacted to my over-reaction… just urgh. To clear things up: no, I didn’t take the whole basilisk thing seriously, but some members did and got upset about it, I got upset, it all got a bit messy. It wasn’t my or anyone else’s best day, but we all have bad moments on the internet. Sadly, the thing about being moderately internet famous is that your silly over-reactions get captured in carbonite forever! I have done/written lots of more sensible things since then, which you can check out over at Less Wrong :)”
Obviously not exactly that, but I think that kind of tone would come across a lot more persuasively than the angry hectoring tone currently adopted whenever this subject comes up.
In his defense, is it possible EY can’t win at this point, regardless of his approach? Maybe the internet has grabbed this thing and the PR whirlwinds are going to do with it whatever they like?
I’ve read apologies from EY where he seems to admit pretty clearly he screwed up. He comes off as defensive and pissy sometimes in my opinion, but he seems sincerely irked about how RW and other outlets have twisted the whole story to discredit LW and himself. From my recall, one comment he made on the reddit sub dedicated to his HP fanfic indicated he was very hurt by the whole kerfuffle, in addition to his obvious frustration.
At this point I think the winning move is rolling with it and selling little plush basilisks as a MIRI fundraiser. It’s our involuntary mascot, and we might as well ‘reclaim’ it in the social justice sense.
Then every time someone brings up “Less Wrong is terrified of the basilisk” we can just be like “Yes! Yes we are! Would you like to buy a plush one?” and everyone will appreciate our ability to laugh at ourselves, and they’ll go back to whatever they were doing.
It’s not a matter of “winning” or “not winning”. The phrase “damage control” was coined for a reason—it’s not about reversing the damage, it’s about making sure that the damage gets handled properly.
So seen through that lens, the question is whether EY is doing a good or bad job of controlling the damage. I personally think that having a page on Less Wrong that explains (and defangs) the Basilisk, along with his reaction to it and why that reaction was wrong (and all done with no jargon or big words for when it gets linked from somewhere, and also all done without any sarcasm, frustration, hurt feelings, accusations, or defensiveness) would be the first best step. I can tell he’s trying, but think that with the knowledge that the Basilisk is going to be talked about for years to come a standardized, tone-controlled, centralized, and readily accessible response is warranted.
I am defining winning as damage control. EY has been trying to control the damage, and in that pursuit, I’m starting to wonder if damage control, to the extent it could be considered successful by many people, is even possible.
He’s a public figure + He made a mistake = People are going to try and get mileage out of this, no matter how he handles it. That’s very predictable.
Further, it’s very easy to come along after the fact and say, “he should have done this and all the bad press could have been avoided!”
A page on LW might work. Or it might be more fodder for critics. If there were an easy answer to how to win via damage control, then it wouldn’t be quite as tricky as it always seems to be.
It’s still a matter of limiting the mileage. Even if there is no formalized and ready-to-fire response (one that hasn’t been written in the heat of the moment), there’s always an option not to engage. Which is what I said last time he engaged, and before he engaged this time (and also after the fact). If you engage, you get stuff like this post to /r/SubredditDrama, and comments about thin skin that not even Yudkowsky really disagrees with.
It doesn’t take hindsight (or even that much knowledge of human psychology and/or public relations) to see that making a twelve paragraph comment about RationalWiki absent anyone bringing RationalWiki up is not an optimal damage control strategy.
And if you posit that there’s no point to damage control, why even make a comment like that?
I didn’t posit there is no point to damage control. I’m saying that in certain cases, people are criticized equally no matter what they do.
If someone chooses not to engage, they are hiding something. If they engage, they are giving the inquisitor what he wants. If they jest about their mistake, they are not remorseful. If they are somber, they are taking it too seriously and making things worse.
I read your links and...yikes...this new round of responses is pretty bad. I guess part of me feels bad for EY. It was a mistake. He’s human. The internet is ruthless…
Let me chime in briefly. The way EY handles this issue tends to be bad as a rule. This is a blind spot in his otherwise brilliant, well, everything.
A recent example: a few months ago a bunch of members of the official Less Wrong group on Facebook were banished and blocked from viewing it without receiving a single warning. Several among them, myself included, had one thing in common: participation in threads about the Slate article.
I myself didn’t care much about it. Participation in that group wasn’t a huge part of my Facebook life, although admittedly it was informative. The point is just that doing things like this, and continuing to do them, accretes a bad reputation around EY.
It really amazes me he has so much difficulty calibrating for the Streisand Effect.
That was part of a brief effort on my part to ban everyone making stupid comments within the LW Facebook Group, which I hadn’t actually realized existed but which I was informed was giving people terrible impressions. I deleted multiple posts and banned all commenters who I thought had made stupid comments on them; the “hur hur basilisk mockery” crowd was only one, but I think a perfectly legitimate target for this general sweep. It’s still a pretty low-quality group, but it’s a lot better than it was before I went through and banned everyone who I saw making more than one stupid comment.
Unfortunately Facebook doesn’t seem to have an easy “delete comment and ban commenter from Group” procedure for Android, which makes it harder to repeat this procedure because Android is most of where I check Facebook.
What does “stupid” refer to in this context? Does it mean the comments were unintelligent? Not quite intelligent enough? Mean? Derailing discussion? I’m asking because there are certainly some criteria where the banning and deleting would leave a worse impression than the original comments, and I’m thinking that the equilibrium may be surprisingly in the direction of the more obnoxious comments. Especially since the banning and deleting is being done by someone who is more identified with LW than likely were any of the commenters.
Thanks for letting us know what happened. I’m one of the Facebook members who were banned, and I’ve spent these months wondering what I might have done wrong. May I at least know what was the stupid thing I said? And is there any atonement procedure to get back in the Facebook group?
So just to be clear: if I say “I won’t give in to the basilisk because Eliezer says I shouldn’t”, will that protect me from the basilisk? If not, what should I do?
If you believe Eliezer, then you may believe him that the basilisk has ~0 probability of occurring. (I should find a citation for that, but I read it just a few minutes ago, somewhere around the discussion of this xkcd comic.) So you are already protected from it, because it does not exist (not even in ways relevant to acausal trade).
More broadly, you should decide to take this approach: never give in to blackmail by somebody who knows that you have decided to take this approach. Now they have no incentive to blackmail you, and you are safe, even if they do exist! (I think that the strategy in this paragraph has been endorsed by Eliezer, but don’t trust me on that until you get a citation. Until then, you’ll have to reason it out for yourself.)
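The no-blackmail precommitment can be put as a toy expected-value calculation. This is only a sketch with made-up payoff numbers (none of them come from any actual decision-theory writeup): if a would-be blackmailer knows the victim will never pay, issuing a threat has negative expected value, so a rational blackmailer never threatens at all.

```python
# Toy model of the anti-blackmail precommitment. All payoff numbers
# here are illustrative assumptions, not anything from the literature.
THREAT_COST = 1      # cost to the blackmailer of issuing/carrying out a threat
PAYOFF_IF_PAID = 10  # what the blackmailer gains if the victim gives in

def blackmailer_expected_value(victim_pays_probability: float) -> float:
    # The blackmailer only profits to the extent the victim might give in.
    return victim_pays_probability * PAYOFF_IF_PAID - THREAT_COST

def blackmailer_threatens(victim_pays_probability: float) -> bool:
    # A rational blackmailer only threatens when expected value is positive.
    return blackmailer_expected_value(victim_pays_probability) > 0

# A victim visibly precommitted to never paying (probability 0) makes the
# threat a pure loss for the blackmailer, so no threat is ever made.
```

The point of the precommitment being *known* is that it sets `victim_pays_probability` to zero from the blackmailer's perspective before any threat is issued.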
By “the basilisk”, do you mean the infohazard, or do you mean the subject matter of the infohazard? For the former, whatever causes you to not worry about it protects you from it.
Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).
I’ve read apologies from EY where he seems to admit pretty clearly he screwed up.
But he did still continue to delete basilisk related discussion afterwards. As far as I understand he never apologized to Roko for deleting the post or wrote an LW post apologizing.
My response in EY’s place would probably be, “I’m a person who had trained himself to take ideas seriously [insert link on Taking Ideas Seriously]. I thought there might be a risk at the time, I acted quickly, and upon further thought it turned out I was wrong and yes, that’s fairly embarrassing in hindsight. That’s one of the pitfalls of Taking Ideas Seriously—you’re more likely to embarrass yourself. But imagine the alternative, where there really is a threat, and people kept quiet because they didn’t want to be embarrassed. From that perspective, I think that the way I acted on the spur of the moment was understandable”.
[Edit: this is apparently not what happened and there may or may not be some sort of smear campaign or something distorting everything, although I’m confused at why it was banned here then. I’m not really sure what actually happened now, oh well… either way, whatever actually happened, I’m taking a general stance of judging people mostly by accomplishments and good ideas rather than mistakes and bad ideas, except in cases of actual harm done.]
That’s not what actually happened; his first comment on the eventually-banned thread said that he didn’t believe in the threat. But yes, that would be a good response if that’s what had happened; he might have to say something like this some day.
That’s pretty much what he did here, except perhaps the tone isn’t quite so modest and has a bit of that status-regulation-blind thing Eliezer often has going on.
If you were feeling uncharitable, you could say that the “lack of status regulation emotions” thing is yet another concept in a long line of concepts that already had names before Eliezer/someone independently discovers them and proceeds to give them a new LW name.
you could say that the “lack of status regulation emotions” thing [...] already had names [...]
It’s sillier than that. It’s attempting to invent a new, hitherto undescribed emotion to explain behavior that’s covered perfectly well by the ordinary vocabulary of social competence, which includes for example words like “tact”. There are also words to describe neurological deviations resulting among other things in a pathological lack of tact, but they too have little to do with emotion.
(Strictly speaking, there are status-regulation emotions, and they are called things like shame and envy. But that clearly isn’t what Eliezer was talking about.)
But what Eliezer is describing is not a “new, hitherto undescribed emotion”, it’s really just a chronic, low-intensity activation of well-known emotional states like shame and embarrassment. Many people nowadays believe that ‘microaggressions’ exist and are a fairly big factor in folks’ self-esteem and even their ordinary functioning. But that too used to be a “new, undescribed phenomenon”! So why would we want to reject what Eliezer calls “status regulation” which is even less radical, being just a minor twist on what was previously known?
In the Facebook post that sparked this, Mysterious Emotion X is clearly described in terms of other-regulation: a “status slapdown emotion”. Shame and embarrassment, chronic and low-grade or otherwise, are directed at self-regulation, so they aren’t a good fit. Envy (and “a sense that someone else has something that I deserve more”, which sounds to me like resentment) is specifically excluded, so it’s not that either.
I’m pretty skeptical of the microaggression model too, but this isn’t the place to be talking about that, if there exists such a place.
Well, same difference really. An other-regarding ‘status slapdown’ emotion can be described fairly easily as a low-intensity mixture of outrage and contempt, both of which are well-known emotions and not “undescribed” at all. It could be most pithily characterized as the counter emotion to loyalty or devotion, which involves an attribution of higher status based on social roles or norms.
I don’t think either of those work. The situation in which this applies, according to Eliezer, is quite specific: another person makes a status claim which you feel is undeserved, so you feel Mysterious Emotion X toward them. It’s neither chronic nor low-grade: the context here was of HJPEV schooling his teachers and the violently poor reception that met with among some readers of HPMOR. (For what it’s worth, I didn’t mind… but I was once the iniquitous little shit that Harry’s being. I expect these readers are identifying with McGonagall instead.) He’s also pretty clear about believing this to be outside the generally accepted array of human emotions: he mentions envy, hate, and resentment among others as things which this is not, which pretty much covers the bases in context.
More than the specific attribution, though, it’s the gee-whiz tone and intimation of originality that rubs me the wrong way. If he’d described it in terms of well-known emotions or even suggested that you could, my objection would evaporate. But he didn’t.
I don’t think that the thing Eliezer called “lack of status regulation emotions”, which makes some people angry when they read how Harry in HPMOR interacts with teachers, is what is commonly called ego or lack of ego.
Fair enough. “Lack of status regulation emotions” is a bit more narrow, perhaps? Either way I see them as very similar concepts, and in the context of HPMOR readers’ anger especially so.
If someone who is high status lacks status regulation emotions they will be nice to a person with low status who seeks help from them and treats them as an equal.
That’s the opposite behavior of what’s commonly called having an ego.
More generally, someone who lacks status-regulating emotions won’t have a fragile, hypersensitive ego, i.e. what most people (though by no means all) usually mean by “having a massive ego” or an “ego problem”. Note that by this definition, many people whose self-esteem is founded in clear and verifiable achievements would be said to “lack status-regulating emotions”. In many circumstances, it’s not viewed as a negative trait.
I’ve had experience with what I think is the same thing that Eliezer called “lack of status regulation emotions”, and I do think it’s more than “narcissistically big ego” and more than “unmotivated and unfortunate status blindness”.
It’s not that I couldn’t see the normal status levels. It’s just that I thought they were stupid and irrelevant (hah!) so I just went off my own internal status values. If you could back up your arguments, you had my respect. If you couldn’t and got defensive instead, you didn’t. And I wasn’t gonna pretend to respect someone just because everyone else thought I was out of line. Because… well, they’re wrong. And I was totally unaware of this at the time because it was just baked into the background of how I saw things.
Good things did come of it, but I definitely stepped on toes, and in those cases it definitely came off like “big ego”.
And in a sense it was, just not in the straightforwardly narcissistic “I’m smarter than you so I don’t have to treat you with respect” way. Just in the “I’m smarter at the ‘not acting smarter than I am’ game, and that is why I don’t have to treat you with respect” way, which, although better, isn’t all that laudable either.
Ah, if the status regulation emotions go both ways, perhaps.
But Eliezer seemed to be referring to how people got angry at how Harry didn’t treat McGonagall in a manner befitting her higher status—this can be attributed to lack of status regulation emotions on the part of Harry, or Harry having a massive ego.
Harry also doesn’t grant respect based on status regulation, but that’s not enough to get someone reading the story angry. I personally found it quite funny. But then, I also don’t put much value on that kind of status. It’s the kind of people with strong status-related emotions who get annoyed by the story.
If someone who is high status lacks status regulation emotions they will be nice to a person with low status who seeks help from them and treats them as an equal.
This is a nice differentiation that I can relate to well. I do not seem to possess status-regulating emotions either (at least not enough to notice myself). And I do treat all people the same (mostly charitably), independent of their status. Actually, I discovered the concept of status quite late (Ayla and the Clan of the Cave Bear, if I remember right) and couldn’t make sense of it for quite some time.
Yeah, I’ve read that and I feel like it’s a miss (at least for me). It’s an altogether too serious and non-self-deprecating take on the issue. I appreciate that in that post Eliezer is trying to correct a lot of misperceptions at once, but my problem with that is:
a) A lot of people won’t actually know about all these attacks (I’d read the RationalWiki article, which I don’t think is nearly as bad as Eliezer says (that is possibly due to its content having altered over time!)), and responding to them all actually gives them the oxygen of publicity.
b) When you’ve made a mistake, the correct action (in my opinion) is to go “yup, I messed up at that point”, give a very short explanation of why, and try to move on. Going into extreme detail gives the impression that Eliezer isn’t terribly sorry for his behaviour. Maybe he isn’t, but from a PR perspective it would be better to look sorry. Sometimes it’s better to move on from an argument rather than trying to keep having it!
Further to that last point, I’ve found that Eliezer often engages with dissent by having a full argument with the person who is dissenting. Now this might be a good strategy from the point of view of persuading the dissenter: if I come in and say cryonics sux, then a reasoned response might change my mind. But engaging so thoroughly with dissent when it occurs actually makes him look more fighty.
I’m thinking here about how it appears to outside observers: just as with a formal debate the goal isn’t to convince the person you are arguing with, it is to convince the audience, with PR the point isn’t to defeat the dissenter with your marvellous wordplay, it is to convince the audience that you are more sane than the dissenter.
Obviously these are my perceptions of how Eliezer comes across, I could easily be an exception.
that status-regulation-blind thing Eliezer often has going on.
Maybe he should have it going on, and damn the consequences. Sometimes you have to get up and say, these are the facts, you are wrong. Not the vapid temporising recommended by thakil.
There are some times when a fight is worth having, and sometimes when it will do more harm than good. With regards to this controversy, I think that the latter approach will work better than the former. I could, of course, be wrong.
I am imagining here a reddit user who has vaguely heard of Less Wrong, and then reads RationalWiki’s article on the basilisk (or now, I suppose, an xkcd reader who does similar). I think that their takeaway from that reddit argument posted by Eliezer might be to think again about the RationalWiki article, but I don’t think they’d be particularly attracted to reading more of what Eliezer has written. Given that I rather enjoy the vast majority of what Eliezer has written, I feel like that’s a shame.
Do you really think that’s how people discover websites?
I think it’s much more likely that someone clicks on a link to a LW post. If the post is interesting he might browse around LW and if he finds interesting content he will come back.
Not everyone. But I think an xkcd comic about the AI box experiment would be an opportunity to let everyone know about Less Wrong, not to have another argument about the basilisk, which is a distraction.
The expression “Damn the consequences” is generally, and in this case, a hyperbole. The consequences being dismissed are those the speaker considers worthy of dismissal in the face of the consequences that truly matter.
A non-figurative version of my comment would be that in the case at hand, putting the actual facts out, as clearly and forthrightly as possible, is the most important thing to do, and concern with supposed reputational damage from saying what is right and ignoring what is irrelevant would be not merely wasted motion, but actively harmful.
But then, I’ll excuse quite a lot of arrogance, in someone who has something to be arrogant about.
If it decreases the number of people who take you seriously and therefore learn about the substance of your ideas, it’s a bad strategy.
And if it increases the number of people who take you seriously, and therefore learn about the substance of your ideas, it’s a good strategy. I’m sure we can all agree that if something were bad, it would be bad, and if it were good, it would be good. Your point?
I think there are potential benefits to both methods, and I also don’t think that they’re necessarily mutually exclusive strategies. At the moment, I would lean towards pure honesty and truth-oriented explanation as being most important as well. I also think that he could do all of that while still minimizing the ‘status smackdown response’, which in that reddit post he did a little of, but I think it’s possible that he could have done a little more while still retaining full integrity with regards to telling it like it is.
But whatever happens, anything is better than that gag order silliness.
Of course he will be. Therefore he should consider getting not-terrible at it. Well, I spy with my little eye an xkcd forum post by EY, so let’s see...
Does MIRI have a public relations person? They should really be dealing with this stuff. Eliezer is an amazing writer, but he’s not particularly suited to addressing a non-expert crowd.
I am no PR specialist, but I think relevant folks should agree on a simple, sensible message accessible to non-experts, and then just hammer that same message relentlessly. So, e.g. why mention “Newcomb-like problems?” Like 10 people in the world know what you really mean. For example:
(a) The original thing was an overreaction,
(b) It is a sensible social norm to remove triggering stimuli, and Roko’s basilisk was an anxiety trigger for some people,
(c) In fact, there is an entire area of decision theory involving counterfactual copies, blackmail, etc. behind the thought experiment, just as there is quantum mechanics behind Schrödinger’s cat. Once you are done sniggering about those weirdos with a half-alive half-dead cat, you might want to look into serious work done there.
What you want to fight with the message is the perception that you are a weirdo cult/religion. I am very sympathetic to what is happening here, but this is, to use the local language, “a Slytherin problem,” not “a Ravenclaw problem.”
I expect in 10 years if/when MIRI gets a ton of real published work under its belt, this is going to go away, or at least morph into “eccentric academics being eccentric.”
p.s. This should be obvious: don’t lie on the internet.
Further: If you search for “lesswrong roko basilisk” the top result is the RationalWiki article (at least, for me on Google right now) and nowhere on the first page is there anything with any input from Eliezer or (so far as such a thing exists) the LW community.
There should be a clear, matter-of-fact article on (let’s say) the LessWrong wiki, preferably authored by Eliezer (but also preferably taking something more like the tone Ilya proposes than most of Eliezer’s comments on the issue) to which people curious about the affair can be pointed.
(Why haven’t I made one, if I think this? Because I suspect opinions on this point are strongly divided and it would be sad for there to be such an article but for its history to be full of deletions and reversions and infighting. I think that would be less likely to happen if the page were made by someone of high LW-status who’s generally been on Team Shut Up About The Basilisk Already.)
Well, I think your suggestion is very good and barely needs any modification before being put into practice.
Comparing what you’ve suggested to Eliezer’s response on the comments of xkcd’s reddit post for the comic, I think he would do well to think about something along the lines of what you’ve advised. I’m really not sure all the finger pointing he’s done helps, nor the serious business tone.
This all seems like a missed opportunity for Eliezer and MIRI. xkcd talks about the dangers of superintelligence to its massive audience, and instead of being able to use that new attention to get the word out about your organisation’s important work, the whole thing instead gets mired down in internet drama about the basilisk for the trillionth time, and a huge part of a lot of people’s limited exposure to LW and MIRI is negative or silly.
I think that your suggestion is good enough that I’ve posted it over on the xkcd threads with attribution. (I’m pretty certain I have the highest xkcd postcount of any LWer, and probably people there remember my name somewhat favorably.)
A better way to stop people pointing and laughing is to do it better than them. Eliezer could probably write something funny along the lines of “I got Streisanded good, didn’t I? That’ll learn me!” Or something else, as long as it is funnier than xkcd or smbc can possibly come up with.
(ok, I deleted my duplicate post then)
Also worth mentioning: the Forum thread, in which Eliezer chimes in.
Blasphemy, our mascot is a paperclip.
I’d prefer a paperclip dispenser with something like “Paperclip Maximizer (version 0.1)” written on it.
But a plush paperclip would probably not hold its shape very well, and become a plush basilisk.
Close enough
I feel the need to switch from Nerd Mode to Dork Mode and ask:
Which would win in a fight, a basilisk or a paperclip maximizer?
Paperclip maximizer, obviously. Basilisks typically are static entities, and I’m not sure how you would go about making a credible anti-paperclip ‘infohazard’.
The same way as an infohazard for any other intelligence: acausally threaten to destroy lots of paperclips, maybe even uncurl them, maybe even uncurl them while they were still holding a stack of pap-ARRRRGH I’LL DO WHATEVER YOU WANT JUST DON’T HURT THEM PLEASE
That depends entirely on what the PM’s code is. If it doesn’t include input sanitizers, a buffer overflow attack could suffice as a basilisk. If your model of a PM basilisk is “Something that would constitute a logical argument that would harm a PM”, then you’re operating on a very limited understanding of basilisks.
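To make the input-sanitization point concrete: here is a toy illustration (not a real exploit, and the function names are hypothetical) of the difference between code that trusts its input and code that validates it. The naive handler crashes on any malformed message, which is the software analogue of a basilisk; the sanitized one rejects bad input instead of dying.

```python
def naive_handler(message: bytes) -> int:
    # Hypothetical toy: assumes the message is always "count:<digits>".
    # Any malformed input raises an exception and kills the handler.
    return int(message.split(b":")[1])

def sanitized_handler(message: bytes) -> int:
    # Same logic, but it validates the input instead of trusting it.
    parts = message.split(b":", 1)
    if len(parts) != 2 or not parts[1].isdigit():
        return 0  # reject malformed input instead of crashing
    return int(parts[1])
```

A real buffer overflow is a memory-corruption bug in languages like C rather than an exception, but the underlying failure mode is the same: input the code was never written to handle.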
Hm. Turn your weakness into a plush toy then sell it to raise money and disarm your critics. Winning.
Excellent idea. I would buy that, especially if it has a really bizarre design.
I’d like merchandise-based tribal allegiance membership signalling items anyway. Anyone selling MIRI mugs or LessWrong T-shirts can expect money from me.
“selling little plush basilisks as a MIRI fundraiser.”
By “selling”, do you mean giving basilisks to people who give money? It seems like a more appropriate policy would be giving a plush basilisk to anyone who doesn’t give money.
Sounds like the first step a Plush Basilisk Maximizer would take… :-D
It should be a snake, only with little flashing LEDs in its eyes.
The canonical basilisk paralyzes you if you look at it. Flickering lights carry the danger of triggering photosensitive epilepsy, and thus are sort of real-life basilisks. Even if the epilepsy reference is lost on many, it’s still clearly a giant snake thing with weird eyes, and, importantly, you can probably get the eyes from somewhere without having to custom-make them.
(AFAIK, little LEDs should be too small to actually pose a threat to epileptics, and they shouldn’t be any worse than any of the other flickering lights around.)
EDIT: Eh, I suppose it could also be stuffed with paperclips or something, if we want to pack as many memes in as possible.
I’d buy this. We can always use more stuffies.
Yes, brilliant idea!
We can save money by re-coloring the plush Cthulhu. It’s basically the same, right? :-)
Alternatively, sell empty boxes labelled “Don’t look!”
It’s not a matter of “winning” or “not winning”. The phrase “damage control” was coined for a reason—it’s not about reversing the damage, it’s about making sure that the damage gets handled properly.
So seen through that lens, the question is whether EY is doing a good or bad job of controlling the damage. I personally think that having a page on Less Wrong that explains (and defangs) the Basilisk, along with his reaction to it and why that reaction was wrong (all done with no jargon or big words for when it gets linked from somewhere, and also without any sarcasm, frustration, hurt feelings, accusations, or defensiveness) would be the best first step. I can tell he’s trying, but I think that, given that the Basilisk is going to be talked about for years to come, a standardized, tone-controlled, centralized, and readily accessible response is warranted.
I am defining winning as damage control. EY has been trying to control the damage, and in that pursuit, I’m starting to wonder if damage control, to the extent it could be considered successful by many people, is even possible.
He’s a public figure + He made a mistake = People are going to try and get mileage out of this, no matter how he handles it. That’s very predictable.
Further, it’s very easy to come along after the fact and say, “he should have done this and all the bad press could have been avoided!”
A page on LW might work. Or it might be more fodder for critics. If there were an easy answer to how to win via damage control, then it wouldn’t be quite as tricky as it always seems to be.
It’s still a matter of limiting the mileage. Even if there is no formalized and ready-to-fire response (one that hasn’t been written in the heat of the moment), there’s always an option not to engage. Which is what I said last time he engaged, and before he engaged this time (and also after the fact). If you engage, you get stuff like this post to /r/SubredditDrama, and comments about thin skin that not even Yudkowsky really disagrees with.
It doesn’t take hindsight (or even that much knowledge of human psychology and/or public relations) to see that making a twelve paragraph comment about RationalWiki absent anyone bringing RationalWiki up is not an optimal damage control strategy.
And if you posit that there’s no point to damage control, why even make a comment like that?
I didn’t posit there is no point to damage control. I’m saying that in certain cases, people are criticized equally no matter what they do.
If someone chooses not to engage, they are hiding something. If they engage, they are giving the inquisitor what he wants. If they jest about their mistake, they are not remorseful. If they are somber, they are taking it too seriously and making things worse.
I read your links and… yikes… this new round of responses is pretty bad. I guess part of me feels bad for EY. It was a mistake. He’s human. The internet is ruthless…
Let me chime in briefly. The way EY handles this issue tends to be bad as a rule. This is a blind spot in his otherwise brilliant, well, everything.
A recent example: a few months ago a bunch of members of the official Less Wrong group on Facebook were banished and blocked from viewing it without receiving a single warning. Several among them, myself included, had one thing in common: participation in threads about the Slate article.
I myself didn’t care much about it. Participation in that group wasn’t a huge part of my Facebook life, although admittedly it was informative. The point is just that doing things like these, and continuing to do things like these, accretes a bad reputation around EY.
It really amazes me he has so much difficulty calibrating for the Streisand Effect.
That was part of a brief effort on my part to ban everyone making stupid comments within the LW Facebook Group, which I hadn’t actually realized existed but which I was informed was giving people terrible impressions. I deleted multiple posts and banned all commenters who I thought had made stupid comments on them; the “hur hur basilisk mockery” crowd was only one, but I think a perfectly legitimate target for this general sweep. It’s still a pretty low-quality group, but it’s a lot better than it was before I went through and banned everyone who I saw making more than one stupid comment.
Unfortunately Facebook doesn’t seem to have an easy “delete comment and ban commenter from Group” procedure for Android, which makes it harder to repeat this procedure because Android is most of where I check Facebook.
Going around banning people without explaining to them why you banned them is, in general, a good way to make enemies.
The fallout of the basilisk incident should have taught you that censorship has costs.
The timing of the sweep and of the discussion about the basilisk article is also awfully coincidental.
What does “stupid” refer to in this context? Does it mean the comments were unintelligent? Not quite intelligent enough? Mean? Derailing discussion? I’m asking because there are certainly some criteria where the banning and deleting would leave a worse impression than the original comments, and I’m thinking that the equilibrium may be surprisingly in the direction of the more obnoxious comments. Especially since the banning and deleting is being done by someone who is more identified with LW than likely were any of the commenters.
Thanks for letting us know what happened. I’m one of the Facebook members who were banned, and I’ve spent these months wondering what I might have done wrong. May I at least know what was the stupid thing I said? And is there any atonement procedure to get back in the Facebook group?
So just to be clear: if I say “I won’t give in to the basilisk because Eliezer says I shouldn’t”, will that protect me from the basilisk? If not, what should I do?
If you believe Eliezer, then you may believe him that the basilisk has ~0 probability of occurring. (I should find a citation for that, but I read it just a few minutes ago, somewhere around the discussion of this xkcd comic.) So you are already protected from it, because it does not exist (not even in ways relevant to acausal trade).
More broadly, you should decide to take this approach: never give into blackmail by somebody who knows that you have decided to take this approach. Now they have no incentive to blackmail you, and you are safe, even if they do exist! (I think that the strategy in this paragraph has been endorsed by Eliezer, but don’t trust me on that until you get a citation. Until then, you’ll have to reason it out for yourself.)
How does that work if they precommit to blackmail even when there is no incentive (which benefits them by making the blackmail more effective)?
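One way to make the grandparent’s strategy, and this counter-commitment worry, concrete is a toy payoff model. All the numbers below are made up purely for illustration: if the victim has credibly committed to never paying, issuing a threat is a pure loss for the blackmailer, so a rational blackmailer never threatens; but if the blackmailer can also pre-commit to carrying out threats regardless, paying becomes the victim’s lesser evil, which is exactly the “commitment race” problem.

```python
# Toy model of the blackmail game, with made-up illustrative payoffs.
# Issuing a threat costs the blackmailer a little; carrying it out costs more.

PAY = 10          # what the victim loses by paying up
HARM = 50         # what the victim loses if the threat is carried out
THREAT_COST = 1   # blackmailer's cost of issuing a threat
CARRY_COST = 5    # blackmailer's extra cost of actually carrying it out

def blackmailer_payoff(victim_pays: bool, carries_out: bool) -> int:
    """Blackmailer's net payoff after issuing a threat."""
    gain = PAY if victim_pays else 0
    cost = THREAT_COST + (CARRY_COST if carries_out else 0)
    return gain - cost

# Case 1: the victim has credibly committed to never paying.
# Both of the blackmailer's options after threatening are strict losses,
# so a payoff-maximizing blackmailer doesn't threaten in the first place.
never_pay_outcomes = [blackmailer_payoff(False, c) for c in (True, False)]
assert max(never_pay_outcomes) < 0  # threatening is strictly unprofitable

# Case 2: the blackmailer *also* pre-commits to always carrying out threats.
# Now the victim chooses between losing PAY and losing HARM, and paying
# is the lesser evil -- the "commitment race" objection from above.
assert -PAY > -HARM
```

This only shows the one-shot logic under the stated assumptions; whose commitment “wins” when both sides can commit first is exactly the open question the parent comment is pointing at.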
By “the basilisk”, do you mean the infohazard, or do you mean the subject matter of the infohazard? For the former, whatever causes you to not worry about it protects you from it.
Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).
Maybe so, but he can lose in a variety of ways and some of them are much worse than others.
But he did still continue to delete basilisk-related discussion afterwards. As far as I understand, he never apologized to Roko for deleting the post, nor wrote an LW post apologizing.
My response in EY’s place would probably be, “I’m a person who had trained himself to take ideas seriously [insert link on Taking Ideas Seriously]. I thought there might be a risk at the time, I acted quickly, and upon further thought it turned out I was wrong and yes, that’s fairly embarrassing in hindsight. That’s one of the pitfalls of Taking Ideas Seriously—you’re more likely to embarrass yourself. But imagine the alternative, where there really is a threat, and people kept quiet because they didn’t want to be embarrassed. From that perspective, I think that the way I acted on the spur of the moment was understandable”.
[Edit: this is apparently not what happened, and there may or may not be some sort of smear campaign or something distorting everything, although I’m confused about why it was banned here, then. I’m not really sure what actually happened now, oh well… Either way, whatever actually happened, I take a general stance of judging people mostly by accomplishments and good ideas rather than by mistakes and bad ideas, except in cases of actual harm done.]
That’s not what actually happened; his first comment on the eventually-banned thread said that he didn’t believe in the threat. But yes, that would be a good response if that’s what had happened; he might have to say something like this some day.
Yeah that would be a much better response. Or alternatively get someone who is more suited to PR to deal with this sort of thing
That’s pretty much what he did here, except perhaps the tone isn’t quite so modest and has a bit of that status-regulation-blind thing Eliezer often has going on.
It’s not status blindness, it’s ego.
You could call it that, yeah.
If you were feeling uncharitable, you could say that the “lack of status regulation emotions” thing is yet another concept in a long line of concepts that already had names before Eliezer/someone independently discovers them and proceeds to give them a new LW name.
It’s sillier than that. It’s attempting to invent a new, hitherto undescribed emotion to explain behavior that’s covered perfectly well by the ordinary vocabulary of social competence, which includes for example words like “tact”. There are also words to describe neurological deviations resulting among other things in a pathological lack of tact, but they too have little to do with emotion.
(Strictly speaking, there are status-regulation emotions, and they are called things like shame and envy. But that clearly isn’t what Eliezer was talking about.)
But what Eliezer is describing is not a “new, hitherto undescribed emotion”, it’s really just a chronic, low-intensity activation of well-known emotional states like shame and embarrassment. Many people nowadays believe that ‘microaggressions’ exist and are a fairly big factor in folks’ self-esteem and even their ordinary functioning. But that too used to be a “new, undescribed phenomenon”! So why would we want to reject what Eliezer calls “status regulation” which is even less radical, being just a minor twist on what was previously known?
In the Facebook post that sparked this, Mysterious Emotion X is clearly described in terms of other-regulation: a “status slapdown emotion”. Shame and embarrassment, chronic and low-grade or otherwise, are directed at self-regulation, so they aren’t a good fit. Envy (and “a sense that someone else has something that I deserve more”, which sounds to me like resentment) is specifically excluded, so it’s not that either.
I’m pretty skeptical of the microaggression model too, but this isn’t the place to be talking about that, if there exists such a place.
Well, same difference really. An other-regarding ‘status slapdown’ emotion can be described fairly easily as a low-intensity mixture of outrage and contempt, both of which are well-known emotions and not “undescribed” at all. It could be most pithily characterized as the counter emotion to loyalty or devotion, which involves an attribution of higher status based on social roles or norms.
I don’t think either of those work. The situation in which this applies, according to Eliezer, is quite specific: another person makes a status claim which you feel is undeserved, so you feel Mysterious Emotion X toward them. It’s neither chronic nor low-grade: the context here was of HJPEV schooling his teachers and the violently poor reception that met among some readers of HPMOR. (For what it’s worth, I didn’t mind… but I was once the iniquitous little shit that Harry’s being. I expect these readers are identifying with McGonagall instead.) He’s also pretty clear about believing this to be outside the generally accepted array of human emotions: he mentions envy, hate, and resentment among others as things which this is not, which pretty much covers the bases in context.
More than the specific attribution, though, it’s the gee-whiz tone and intimation of originality that rubs me the wrong way. If he’d described it in terms of well-known emotions or even suggested that you could, my objection would evaporate. But he didn’t.
I don’t think that the thing Eliezer called “lack of status regulation emotions” that makes some people angry when they read how Harry in HPMOR interacts with teachers is what commonly called ego or lack of ego.
Fair enough. “Lack of status regulation emotions” is a bit more narrow, perhaps? Either way I see them as very similar concepts, and in the context of HPMOR readers’ anger especially so.
If someone who is high status lacks status regulation emotions they will be nice to a person with low status who seeks help from them and treats them as an equal.
That’s the opposite behavior of what’s commonly called having an ego.
More generally, someone who lacks status-regulating emotions won’t have a fragile, hypersensitive ego, i.e. what most people (though by no means all) usually mean by “having a massive ego” or an “ego problem”. Note that by this definition, many people whose self-esteem is founded in clear and verifiable achievements would be said to “lack status-regulating emotions”. In many circumstances, it’s not viewed as a negative trait.
I’ve had experience with what I think is the same thing that Eliezer called “lack of status regulation emotions”, and I do think it’s more than a “narcissistically big ego” and more than “unmotivated and unfortunate status blindness”.
It’s not that I couldn’t see the normal status levels. It’s just that I thought they were stupid and irrelevant (hah!), so I just went off my own internal status values. If you could back up your arguments, you had my respect. If you couldn’t and got defensive instead, you didn’t. And I wasn’t gonna pretend to respect someone just because everyone else thought I was out of line. Because… well, they’re wrong. And I was totally unaware of this at the time, because it was just baked into the background of how I saw things.
Good things did come of it, but I definitely stepped on toes, and in those cases it definitely came off like “big ego”.
And in a sense it was, just not in the straightforwardly narcissistic “I’m smarter than you so I don’t have to treat you with respect” way. Just in the “I’m smarter at the ‘not acting smarter than I am’ game, and that is why I don’t have to treat you with respect” way, which, although better, isn’t all that laudable either.
Ah, if the status regulation emotions go both ways, perhaps.
But Eliezer seemed to be referring to how people got angry at how Harry didn’t treat McGonagall in a manner befitting her higher status—this can be attributed to lack of status regulation emotions on the part of Harry, or Harry having a massive ego.
Harry also doesn’t show respect on the basis of status regulation, but that’s not enough to get someone reading the story angry. I personally found it quite funny. But then, I also don’t put much value on that kind of status. It’s the kind of people with strong status-related emotions who get annoyed by the story.
This is a nice differentiation that I can relate to well. I do not seem to possess status-regulating emotions either (at least not enough to notice myself). And I do treat all people the same (mostly charitably), independent of their status. Actually, I discovered the concept of status quite late (Ayla and the Clan of the Cave Bear, if I remember right) and couldn’t make sense of it for quite some time.
Status blindness is a disability, pride is a mortal sin.
:)
Yeah, I’ve read that, and I feel like it’s a miss (at least for me). It’s an altogether too serious and non-self-deprecating take on the issue. I appreciate that in that post Eliezer is trying to correct a lot of misperceptions at once, but my problem with that is:
a) A lot of people won’t actually know about all these attacks (I’d read the RationalWiki article, which I don’t think is nearly as bad as Eliezer says; that is possibly due to its content having altered over time!), and responding to them all actually gives them the oxygen of publicity. b) When you’ve made a mistake, the correct action (in my opinion) is to go “yup, I messed up at that point”, give a very short explanation of why, and try to move on. Going into extreme detail gives the impression that Eliezer isn’t terribly sorry for his behaviour. Maybe he isn’t, but from a PR perspective it would be better to look sorry. Sometimes it’s better to move on from an argument rather than trying to keep having it!
Further to that last point, I’ve found that Eliezer often engages with dissent by having a full argument with the person who is dissenting. Now, this might be a good strategy from the point of view of persuading the dissenter: if I come in and say “cryonics sux”, then a reasoned response might change my mind. But engaging so thoroughly with dissent whenever it occurs actually makes him look more fighty.
I’m thinking here about how it appears to outside observers: just as with a formal debate the goal isn’t to convince the person you are arguing with, it is to convince the audience, with PR the point isn’t to defeat the dissenter with your marvellous wordplay, it is to convince the audience that you are more sane than the dissenter.
Obviously these are my perceptions of how Eliezer comes across; I could easily be an exception.
Maybe he should have it going on, and damn the consequences. Sometimes you have to get up and say, these are the facts, you are wrong. Not the vapid temporising recommended by thakil.
Sometimes yes, and sometimes no.
Depends what the consequences are. Ignoring human status games can have some pretty bad consequences.
There are some times when a fight is worth having, and sometimes when it will do more harm than good. With regards to this controversy, I think that the latter approach will work better than the former. I could, of course, be wrong.
I am imagining here a reddit user who has vaguely heard of Less Wrong and then reads RationalWiki’s article on the basilisk (or now, I suppose, an xkcd reader who does the same). I think that their takeaway from that reddit argument posted by Eliezer might be to think again about the RationalWiki article, but I don’t think they’d be particularly attracted to reading more of what Eliezer has written. Given that I rather enjoy the vast majority of what Eliezer has written, I feel like that’s a shame.
Do you really think that’s how people discover websites?
I think it’s much more likely that someone clicks on a link to a LW post. If the post is interesting he might browse around LW and if he finds interesting content he will come back.
Not everyone. But I think an xkcd comic about the AI box experiment would be an opportunity to let everyone know about less wrong, not to have another argument about the basilisk which is a distraction.
“Damn the consequences” seems like an odd thing to say on a website that’s noted for its embrace of utilitarianism.
The expression “Damn the consequences” is generally, and in this case, a hyperbole. The consequences being dismissed are those the speaker considers worthy of dismissal in the face of the consequences that truly matter.
A non-figurative version of my comment would be that in the case at hand, putting the actual facts out, as clearly and forthrightly as possible, is the most important thing to do, and concern with supposed reputational damage from saying what is right and ignoring what is irrelevant would be not merely wasted motion, but actively harmful.
But then, I’ll excuse quite a lot of arrogance, in someone who has something to be arrogant about.
If it decreases the number of people who take you seriously and therefore learn about the substance of your ideas, it’s a bad strategy.
And if it increases the number of people who take you seriously, and therefore learn about the substance of your ideas, it’s a good strategy. I’m sure we can all agree that if something were bad, it would be bad, and if it were good, it would be good. Your point?
I think there are potential benefits to both methods, and I also don’t think that they’re necessarily mutually exclusive strategies. At the moment, I would lean towards pure honesty and truth-oriented explanation as being most important as well. I also think that he could do all of that while still minimizing the ‘status smackdown response’, which in that reddit post he did a little of, but I think it’s possible that he could have done a little more while still retaining full integrity with regards to telling it like it is.
But whatever happens, anything is better than that gag order silliness.
I wonder if Eliezer will have to be on damage control for the basilisk forever. 4 years on, and it still garners interest.
Of course he will be. Therefore he should consider getting not-terrible at it. Well, I spy with my little eye an xkcd forum post by EY, so let’s see...
Does MIRI have a public relations person? They should really be dealing with this stuff. Eliezer is an amazing writer, but he’s not particularly suited to addressing a non-expert crowd.
Still failing to do it right. “But we are doing math!” is sort of orthogonal to what makes Roko’s basilisk so funny.
What would doing it right entail?
I am no PR specialist, but I think the relevant folks should agree on a simple, sensible message accessible to non-experts, and then just hammer that same message relentlessly. So, e.g., why mention “Newcomb-like problems”? Like 10 people in the world know what you really mean. For example:
(a) The original thing was an overreaction,
(b) It is a sensible social norm to remove triggering stimuli, and Roko’s basilisk was an anxiety trigger for some people,
(c) In fact, there is an entire area of decision theory involving counterfactual copies, blackmail, etc. behind the thought experiment, just as there is quantum mechanics behind Schrodinger’s cat. Once you are done sniggering about those weirdos with a half-alive half-dead cat, you might want to look into serious work done there.
What you want to fight with the message is the perception that you are a weirdo cult/religion. I am very sympathetic to what is happening here, but this is, to use the local language, “a Slytherin problem,” not “a Ravenclaw problem.”
I expect in 10 years if/when MIRI gets a ton of real published work under its belt, this is going to go away, or at least morph into “eccentric academics being eccentric.”
p.s. This should be obvious: don’t lie on the internet.
Yes.
Further: If you search for “lesswrong roko basilisk” the top result is the RationalWiki article (at least, for me on Google right now) and nowhere on the first page is there anything with any input from Eliezer or (so far as such a thing exists) the LW community.
There should be a clear, matter-of-fact article on (let’s say) the LessWrong wiki, preferably authored by Eliezer (but also preferably taking something more like the tone Ilya proposes than most of Eliezer’s comments on the issue) to which people curious about the affair can be pointed.
(Why haven’t I made one, if I think this? Because I suspect opinions on this point are strongly divided and it would be sad for there to be such an article but for its history to be full of deletions and reversions and infighting. I think that would be less likely to happen if the page were made by someone of high LW-status who’s generally been on Team Shut Up About The Basilisk Already.)
Well, I think your suggestion is very good and barely needs any modification before being put into practice.
Comparing what you’ve suggested to Eliezer’s response on the comments of xkcd’s reddit post for the comic, I think he would do well to think about something along the lines of what you’ve advised. I’m really not sure all the finger pointing he’s done helps, nor the serious business tone.
This all seems like a missed opportunity for Eliezer and MIRI. xkcd talks about the dangers of superintelligence to its massive audience, and instead of being able to use that new attention to get the word out about your organisation’s important work, the whole thing instead gets mired down in internet drama about the basilisk for the trillionth time, and a huge part of a lot of people’s limited exposure to LW and MIRI is negative or silly.
I think that your suggestion is good enough that I’ve posted it over on the xkcd threads with attribution. (I’m pretty certain I have the highest xkcd postcount of any LWer, and probably people there remember my name somewhat favorably.)
Ah yes, trying to do the same thing over and over and expecting a different result.
Serious replies DO NOT WORK. Eliezer has already tried it multiple times:
https://www.reddit.com/r/Futurology/comments/2cm2eg/rokos_basilisk/cjjbqv1
http://www.reddit.com/r/Futurology/comments/2cm2eg/rokos_basilisk/cjjbqqo
and his last two posts on reddit (transient link, not sure how to link to the actual replies): http://www.reddit.com/user/EliezerYudkowsky
A better way to stop people pointing and laughing is to do it better than them. Eliezer could probably write something funny along the lines of “I got Streisanded good, didn’t I? That’ll learn me!” Or something else, as long as it is funnier than xkcd or smbc can possibly come up with.