[LINK] Another “LessWrongers are crazy” article—this time on Slate
WARNING: Memetic hazard.
http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html?wpisrc=obnetwork
Is there anything we should do?
Meet 10 new people (over a moderately challenging, personally specific timeframe).
Express gratitude or appreciation.
Work close to where we live.
Have new experiences.
Get regular exercise.
i.e. No. Nothing about this article comes remotely close to changing the highest expected value actions for the majority of the class ‘we’. If it happens that there is a person in that class for whom this opens an opportunity to create (more expected) value, then it is comparatively unlikely that that person is the kind who would benefit from “we shoulding” exhortations.
I want this list posted in response to every “is there anything we should do” ever. Just all over the internet. I would give you more than one upvote just for that list if I could.
(source)
Writing a bot to do this isn’t that hard, and you can always rethink the list to optimize it for a more general audience.
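As a minimal sketch of what such a bot could do, assuming only the trigger question and the canned list quoted above (the function names and everything else are illustrative, and hooking it up to an actual posting API is deliberately left out):

```python
from typing import Optional

# The trigger phrase and the canned list are taken from the comments above;
# the rest is an illustrative sketch, not a working forum bot.

CANNED_REPLY = "\n".join([
    "Meet 10 new people (over a moderately challenging, personally specific timeframe).",
    "Express gratitude or appreciation.",
    "Work close to where we live.",
    "Have new experiences.",
    "Get regular exercise.",
])

def maybe_reply(comment_text: str) -> Optional[str]:
    """Return the canned list if the comment asks the trigger question, else None."""
    if "is there anything we should do" in comment_text.lower():
        return CANNED_REPLY
    return None

if __name__ == "__main__":
    print(maybe_reply("Is there anything we should do?"))
```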
Laugh, as the entire concept (and especially the entire reaction to it by Eliezer and people who take the ‘memetic hazard’ thing seriously) is and always has been laughable. It’s certainly given my ab muscles a workout every now and then over the last three years… maybe with more people getting to see it and getting that exercise it’ll be a net good! God, the effort I had to go through to dig through comment threads and find that google cache...
This is also such a delicious example of the Streisand effect...
Yes, Eliezer’s Streisanding is almost suspiciously delicious. One begins to wonder if he is in thrall to… well, perhaps it is best not to speculate here, lest we feed the Adversary.
Hanlon’s Beard.
There.
I think this is also a delicious example of how easy it is to troll LessWrong readers. Do you want to have an LW article and a debate about you? Post an article about how LW is a cult or about Roko’s basilisk. Success 100% guaranteed.
Think about the incentives this gives to people who make their money by displaying ads on their websites. The only way we could motivate them more would be to pay them directly for posting that shit.
This isn’t a particularly noteworthy attribute of Less Wrong discussion; most groups below a certain size will find it interesting when a major media outlet talks about them. I’m sure that the excellent people over at http://www.clown-forum.com/ would be just as chatty if they got an article.
I suppose you could say that it gives journalists an incentive to write about the groups below that population threshold that are likely to generate general interest among the larger set of readers. But that’s just the trivial result in which we have invented the human interest story.
I was going to say precisely that. In the end, banning Roko’s post was pointless and ineffectual: anyone with internet access can learn about the “dangerous” idea. Furthermore, it’s still being debated here, in a LessWrong thread. Are any of you having nightmares yet?
On the other hand, does not banning these debates contribute to having fewer of them? Doesn’t seem so. We already had a dozen of them, and we are going to have more, and more, and more...
I can’t know what happened in the parallel Everett branch where Eliezer didn’t delete that comment… but I wouldn’t be too surprised if the exposure of the basilisk was pretty much the same—without the complaining about censorship, but perhaps with more of “this is what people on LW actually believe, here is the link to prove it”.
I think this topic is debated mostly because it’s the clever contrarian thing to do. You have a website dedicated to rationality and artificial intelligence where people claim to care about humanity? Then you get contrarian points for inventing clever scenarios of how using rationality will actually make things go horribly wrong. It’s too much fun to resist. (Please note that this motivated inventing of clever horror scenarios is different from predicting actual risks. Finding actual risks is a useful thing to do. Inventing dangers that exist only because you invented and popularized them, not very useful.)
The debates are not technically banned, but there are still strict limits on what we’re allowed to say. We cannot, for instance, have an actual discussion about why the basilisk wouldn’t work.
Furthermore, there are aspects other than the ban that make LW look bad. Just the fact that people fall for the basilisk makes LW look bad all by itself. You could argue that the people who fall for the basilisk are mentally unstable, but having too many mentally unstable people or being too willing to limit normal people for the sake of mentally unstable people makes us look bad too. Ultimately, the problem is that “looking bad” happens because there are aspects of LW that people consider to be bad. It’s not just a public relations problem—the basilisk demonstrates a lack of rationality on LW and the only way to fix the bad perception is to fix the lack of rationality.
One of the problems is that the basilisk is very weird, but the prerequisites—which are mostly straight out of the Sequences—are also individually weird. So explaining the basilisk to people who haven’t read the Sequences through a few times and haven’t been reading LessWrong for years is … a bit of work.
Presumably, you don’t believe the basilisk would work.
If you don’t believe the basilisk would work, then it really doesn’t matter all that much that people don’t understand the prerequisites. After all, even understanding the prerequisites won’t change their opinion of whether the basilisk is correct. (I suppose that understanding the sequences may change the degree of incorrectness—going from crazy and illogical to just normally illogical—but I’ve yet to see anyone argue this.)
Are you saying it’s meaningless to tell someone about the prerequisites—which, as I note, are pretty much straight out of the Sequences—unless they think the basilisk would work?
It’s not meaningless in general, but it’s meaningless for the purpose of deciding that they shouldn’t see the basilisk because they’d misunderstand it. They don’t misunderstand it—they know that it’s false, and if they read the sequences they’d still know that it’s false.
As I pointed out, you could still argue that they’d misunderstand the degree to which the basilisk is false, but I’ve yet to see anyone argue that.
I’ve said precisely this in the past, but now I’m starting to second-guess myself. Sure, if we’re worried about basilisk-as-memetic-hazard then deletion was an obvious mistake… but are any of you having nightmares yet? I’m guessing “no”, in which case we’re left with basilisk-as-publicity-stunt, which might actually be beneficial.
I wouldn’t personally have advocated recruiting rationalists via a tactic like “Get a bunch of places to report on how crazy you are, then anyone who doesn’t believe everything they read will be pleasantly surprised”, but I’m probably the last person to ask about publicity strategy. I also would have disapproved of “Write Harry Potter fan fiction and see who wants to dig through the footnotes”, without benefit of hindsight.
Since LW is going to get a lot of visitors someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.
If an AI article is used, I nominate Sorting Pebbles Into Correct Heaps.
Or perhaps a publicity boost would be better utilized by directing traffic to effective altruism information, e.g. at GiveWell or 80,000 Hours.
Maybe all of them together… because we have more than one topic here, and different people are interested in different things.
Something about AI math. Something about values. Something about improving your life. Something about helping others. Something about spreading rationality.
Let’s check featured articles on the main page on 19 July 2014… and… there we go.
“Eliezer Yudkowsky Facts” as a featured article. Wow, that’s certainly one way to react to this kind of criticism.
(I approve.)
Re-reading that post, I came upon this entry, which seems particularly relevant:
Assuming we can trust the veracity of this “fact”, I think we have to begin to doubt Eliezer’s rationality. I mean, sure, the Streisand effect is a real thing, but causing Roko’s obscure thought experiment to become the subject of the #1 recently most read article on Slate, just by censoring it? Is that really realistic?
...
Seriously, did anyone actually see something like this coming?
I like this idea. Of course, right now the top thing on discussion is this thread, so here is probably as good a place as any to link the good stuff. One of my own personal favorites: Yvain’s Diseased Thinking. Or, more or less the same point in much more concise form: Eliezer’s Disguised Queries.
Also, the top-ranked posts page, which I’m not sure is linked anywhere obvious.
The problem isn’t that easy to solve. Consider that MIRI, then SIAI, already had a bad name before Roko’s post, and before I ever voiced any criticism. Consider this video from an actual AI conference, from March 2010, a few months before Roko’s post. Someone in the audience makes the following statement:
Or consider the following comment by Ben Goertzel from 2004:
And this is Yudkowsky’s reply:
LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.
And become just another procrastination website.
Okay, there is still CFAR here. Oh wait, they also have Eliezer on the team! And they believe they can teach the rest of the world to become more rational. How profoundly un-humble, or, may I say, cultish? Scratch CFAR, too.
While we are at it, let’s remove the articles “Tsuyoku Naritai!”, “Tsuyoku vs. the Egalitarian Instinct” and “A Sense That More Is Possible”. They contain the same arrogant ideas, and encourage the readers to think likewise. We don’t want more people trying to become awesome, or even worse, succeeding at that.
Actually, we should remove the whole Sequences. I mean, how could we credibly distance ourselves from Eliezer, if we leave hundreds of his articles as the core of this website? No one reads the Sequences anyway. Hell, these days no one even dares to say “Read the Sequences” anymore. Which is good, because telling people to read the Sequences has been criticized as cultish.
There are also some poisonous memes that make people think badly of us, so we should remove them from the website. Specifically: humans could be smarter than Einstein, machines could be smarter than humans, many worlds in quantum physics, atheism… I probably forgot some.
Oh, don’t forget to remove the “bragging threads”! Seriously, how immature.
Become?
What I meant by distancing LessWrong from Eliezer Yudkowsky is to become more focused on actually getting things done rather than rehashing Yudkowsky’s cached thoughts.
LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively. Not unlike what the Polymath Project is doing.
To do so LessWrong has to squelch all the noise by no longer caring about getting more members and by strongly moderating non-technical off-topic posts.
I am not talking about censorship here. I am talking about something unproblematic. Once the aim of LessWrong is clear, namely to tackle technical problems, moderation becomes an understandable necessity. And I’d be surprised if any moderation were necessary once only highly technical problems are discussed.
Doing this will make people hold LessWrong in high esteem. Because nothing is as effective at proving that you are smart and rational as getting things done.
ETA: How about trying to solve the Pascal’s mugging problem? It’s highly specific, technical, and does pertain to rationality.
I guess I agree with you on some more meta level. LessWrong as it is now is not optimal. (Yeah, it is very cheap to say this; the problem is coming to a solution and an agreement about how specifically the optimal version would look.) LessWrong as it is now is a result of a historical process, and technical limitations given by the near-unmaintainability of the Reddit code. If we tried to design it from scratch, we would certainly invent something different, with the experience we have now.
But I guess a part of the problem is general for web discussions, and seems to me somewhat analogous to Gresham’s law: “lower-quality content drives out higher-quality content”. Specifically, people say they prefer higher-quality content, but they also want quantity on demand. However high the quality on a website, if people come a week later and find no new content, they will complain. But if there is new content every week, people will learn to visit the site more often, and then they will complain about not having new content every day. There will never be enough. And the supply of high-quality content is limited. If the choice is given to readers, at some point they will express a preference for more content, even if it means somewhat lower quality. And then again, and again, until the quality drops dramatically, but each single step felt like a reasonable trade-off.
There is also a systematic bias, that people who spend more time procrastinating online have more voice in online debates… for the obvious reasons. So the community consensus for “how much new content per day or per week do we actually need?” will be mostly given by the greatest online procrastinators, which means the answer will pretty much always be “more!”
So it would seem the solution for keeping the quality level is to remain very selective in accepting new content, even when that is met with disapproval from the majority of the community. Which will provoke not just anger, but also hundreds of rationalizations. (If we don’t have on average three new articles in Discussion every day, it means LessWrong is dying, and something must be done! Let’s post all Open Thread comments as separate articles.) But there is another problem...
It’s not just about readers, but also about writers. Writers want readers. Also, new writers are born from (a small minority of) readers. When the readers move to a different place, the old writers will start feeling lonely. And the new writers, they will publish even their high-quality content at the new place, because now this is their place. The tribe has moved elsewhere.
Something similar to what you want already exists. It’s here: the MIRI blog. But there is no debate there, because the tribe is not there: it’s at LessWrong. -- Okay, this is probably not exactly what you wanted. But my point is: Imagine that tomorrow, LW splits into two websites: LW1 will contain exactly what you want, and LW2 will contain everything else. At first, you will be satisfied. Most readers will move to LW2, because there will be more content and more debate. They will check the LW1 homepage once a day, then once a week, then once every few months. And then, gradually, even the LW1 writers will slowly switch to publishing on LW2… because that’s where most of the readers will be; and authors want to have readers. And after some time, LW1 will be practically dead, and LW2 will be exactly what Less Wrong is now, with the same complaints.
What if we link LW1 and LW2 together, so that everyone who has a user account on LW2 automatically has a user account on LW1, and the LW2 homepage displays new articles from LW1 and vice versa? That would keep the lower-quality debate on LW2, and yet every new piece of content on LW1 would immediately attract all LW2 readers. Writing for LW1 would have the same audience as LW2, just higher status! Isn’t that a best-of-both-worlds solution?
Unfortunately, I have just reinvented “Main” and “Discussion”. Which, as we already know, is not satisfying.
At this moment, I simply don’t know what to do anymore. I mean, I could try to come up with some plausible-sounding ideas, but I don’t trust them anymore.
It seems to me that what XiXiDu wants is not just any high-quality posts, and that the classification of LessWrong posts into high- and low-quality buckets fails to capture what he/she tried to convey. It seems to me that XiXiDu was talking about the lack of problem-solving posts, whose typical titles could be something like “Problem 123: Let’s brainstorm possible angles for attacking it” or “Problem 456: Let’s try unexpected approach 789 and see if it leads somewhere” (not unlike the aforementioned Polymath Project, or maybe even MathOverflow). Currently neither “Main” (which is mostly about presenting arguments that are already polished) nor “Discussion” (which is a mishmash of mostly links, open threads, and posts considered too short for Main) contains many posts of that type.
One solution might be a reputation net—people who liked this also liked that. With luck, there’d be a cluster of people who want the same sort of thing you do.
Why don’t we?
The Useful Idea of Truth
According to the Slate article,
Uh, no. Surprisingly few “rich dudes” have shown an interest in cryonics. I know quite a few cryonicists and I have helped to organize cryonics-themed conferences, and to the best of my knowledge no one on the Forbes 500 list has signed up.
Moreover ordinary people can afford cryonics arrangements by using life insurance as the funding mechanism.
We can see that rich people have avoided cryonics from the fact that the things rich people really care about tend to become status signals and attract adventuresses in search of rich husbands. In reality cryonics lacks this status and acts like “female Kryptonite.” Just google the phrase “hostile wife phenomenon” to see what I mean. In other words, I tell straight men not to sign up for cryonics for the sake of their dating prospects.
Peter Thiel seems to be on the Forbes 500 list. Are you arguing that he isn’t signed up for cryonics? Are you saying he simply isn’t attending those conferences?
Can you give examples?
Ordinary people seem to be in the habit of frittering their life insurance away on their descendants. Perhaps cryonics is for the moderately well-off and single.
To be fair, the article got lots of things wrong.
This really is not a friendly civilization, is it?
If Langford basilisks actually existed, Gawker would be the first site to use them.
I find belief in basilisks ridiculous. Arguing that an idea could do harm by merely occupying space in a brain is a tremendous discredit to humanity. Any adult brain that is so vulnerable as to suffer actual emotional damage by the mere contemplation of an idea is a brain accustomed to refusing to deal with reality. If the fact is that The Basilisk wants to torture me, I want to believe that it wants to torture me.
WARNING: THE LITANY OF TARSKI IS NOT DESIGNED TO WORK INSIDE A FEEDBACK LOOP
In this example the basilisk will want to torture you only if you believe that it will want to torture you. “The fact” is not a fact until the loop is complete. Note that both alternatives are “facts”, even though they appear mutually exclusive.
Interesting. I’ll need to think about this.
Or, alternatively, need to not think about this.
If we’re talking about Langford-type basilisks, that’s a reasonable position. But if you’re claiming that no idea can cause disutility, I find that idea to be ridiculous. And you arguing against an idea on the basis that it would be insulting to humanity is rather … ironic.
This is such a Critical Empathy Fail that I can barely take you seriously. Here’s a hint: original sin.
Not only is original sin fictional evidence; its presuppositions about human nature are worthy of being taken far less seriously than my worst nonsense. The whole “things humans are not meant to know” theme goes far beyond the necessary caution our flaws demand; it’s blatantly misanthropic.
I don’t think he’s referring to the fruit of knowledge of good and evil, I think he’s referring to the doctrine of original sin itself, which he’s suggesting caused harm by occupying space in Christian brains.
Although belief in the inherent brokenness of humanity is a poisonous meme that can, at its worst, make you hate yourself and mess with the happiness of others, I see it as different from a basilisk. You can still function and lead a goal-driven life while under the influence of religious dogma. It does not paralyze you with terror the way a basilisk is reputed to do.
Unless the definition of “actual harm” was meant to contain this sentiment, it’s sufficient for a counterexample to the general point you expressed above to merely be an idea that causes harm by occupying space in a brain, regardless of whether it’s a true basilisk or not.
I agree that many beliefs about basilisks are ridiculous. Especially beliefs about what the correct decisions to make are in response to various scenarios. It would be a mistake to not believe that there is a particular failure mode that an AI-creating civilisation could have which would result in scenarios referred to as Roko’s Basilisk. It isn’t even an all that remarkable or unusual failure mode. Just a particular instance of extorting those vulnerable to extortion in the name of “the greater good”.
The mistake here is ‘merely’. I can think of reasons why I would not want my covert assets to each have knowledge of all the other assets’ false identities. The presence of that information could cause (allow) other agents to do harm. This isn’t particularly different in means of action.
I thought the article was quite good.
Yes it pokes fun at lesswrong. That’s to be expected. But it’s well written and clearly conveys all the concepts in an easy to understand manner. The author understands lesswrong and our goals and ideas on a technical level, even if he doesn’t agree with them. I was particularly impressed in how the author explained why TDT solves Newcomb’s problem. I could give that explanation to my grandma and she’d understand it.
I don’t generally believe that “any publicity is good publicity.” However, this publicity is good publicity. Most people who read the article will forget it and only remember lesswrong as that kinda weird place that’s really technical about decision stuff (which is frankly accurate). Those people who do want to learn more are exactly the people lesswrong wants to attract.
I’m not sure what people’s expectations are for free publicity but this is, IMO, best case scenario.
From a technical standpoint, this bit:
Seems wrong. Omega wouldn’t necessarily have to simulate the universe, although that’s one option. If it did simulate the universe, showing sim-you an empty box B doesn’t tell it much about whether real-you will take box B when you haven’t seen that it’s empty.
(Not an expert, and I haven’t read Good and Real which this is supposedly from, but I do expect to understand this better than a Slate columnist.)
And I think the final two paragraphs go beyond “pokes fun at lesswrong”.
It is wrong in about the same way that high-school chemistry is wrong. Not one of the statements is true, but the error seems to be one of not quite understanding the details rather than any overt misrepresentation. I.e. I’d cringe and say “more or less”, since that’s closer to getting Transparent Newcomb’s right than I could reasonably expect from most people.
The other options work out the same as simulating the universe for the purpose of telling you how you should decide to behave, but “simulating the universe” makes it visceral and easy to imagine.
Yes, they’re a caution about reason as memetic immune disorder. The money quote for the whole article is:
Of course, mentioning the articles on ethical injunctions would be too boring.
Here comes the Straw Vulcan’s younger brother, the Straw LessWrongian. (Brought to you by RationalWiki.)
It’s troublesome how ambiguous the signals are that LessWrong is sending on some issues.
On the one hand LessWrong says that you should “shut up and multiply, to trust the math even when it feels wrong”. On the other hand Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”.
On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that ends don’t justify the means for humans.
On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying “Oops”. On the other hand Yudkowsky tries to patch a framework that is obviously broken.
Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced with too much uncertainty.
Wow, these are very interesting examples!
Okay, for me the whole paradox breaks down to this:
I have limited brainpower and my hardware is corrupted. I am not able to solve all problems, and even where I believe I have a solution, I can’t trust myself. On the other hand, I should use all the intelligence I have, simply because there is no convincing argument why doing anything else would be better.
Using my reasoning to study my reasoning itself, and the biases thereof, here are some typical failure modes: those are the things I probably shouldn’t do even if they seem rational. Now I’m kinda meta-reasoning about where should I follow my reasoning and where not. And things are getting confusing; probably because I am getting closer to limits of my rationality. Still, there is no better way for me to act.
From the outside, this may seem like having dozen random excuses. But there are no better solutions. So the socially savvy solution is to shut up and pretend the whole topic doesn’t even exist. It doesn’t help to solve the problem, but it helps to save face. Sweeping the human irrationality under the rug instead of exposing it and then admitting that you, too, are only a human.
Expounding at length on dust specks vs torture, shut up and multiply and “taking ideas seriously” is likely to make people look askance at you, even if you also add ”… but don’t do anything weird, OK?” on the end.
Looks like a fairly standard parable about how we should laugh at academic theorists and eggheads because of all those wacky things they think. If only Less Wrong members had the common sense of the average Slate reader, then they would instantly see through such silly dilemmas.
Giving people the chance to show up and explain that this community is Obviously Wrong And Here’s Why is a pretty good way to start conversations, human nature being what it is. An opportunity to have some interesting dialogues about the broader corpus.
That said, I am in the camp that finds the referenced ‘memetic hazard’ to be silly. If you are the sort of person who takes it seriously, this precise form of publicity might be more troubling for the obvious ‘hazard’ reasons. Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?
Vanishingly small—the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares, but I don’t remember anybody actually complaining about it. Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody’s time and attention (as community drama over moderation almost always is).
‘Moderation’ was precisely the opposite of the response that occurred. Hysterical verbal abuse is not the same thing as deleting a post and mere censorship would not have created such a lasting negative impact. While ‘moderator censorship’ was technically involved the incident is a decidedly non-central member of that class.
Nearly four years ago to the day, going by RationalWiki’s chronology.
Talking about it presumably makes it feel like a newer, fresher issue than it is.
Eliezer specifically denied the possibility of a basilisk, although no theory of acausal blackmail in reflective equilibrium exists yet.
Roko’s post was deleted because of how people reacted to it, not because it was a real memetic hazard.
ETA: on a second review, that’s the reason Yudkowsky gave after the fact. I’m not convinced it was his initial motivation.
Surely there’s some non-zero possibility of acausal blackmail?
Well, I guess the standard caveat applies here: there’s nothing that has really 0 chance of happening.
I don’t know about that, but if it turned out acausal blackmail was logically impossible, that would deserve a probability as small as we can allow ourselves.
Isn’t this precisely what TDT solves?
I sincerely have no idea. I don’t even know if TDT stands on its own as a completed theory.
I’d say it’s about as much of a risk as a self-loathing basilisk who punishes only people who supported its creation. It’s wrong in the same way Pascal’s Wager is wrong, with some extra creepiness added.
Given that nobody else ever complained, AFAIK, it seems that he was the only person troubled by that post.
EDIT: not.
I got email from basilisk victims, as noted elsewhere in this thread (this is why I created the RW article, ’cos individual email doesn’t scale).
Point taken.
Not sure I agree with your point. There’s a standard LW idea that smart people can believe in crazy things due to their environment. For “environment” you can substitute “non-LW” or “LW” as you wish.
That’s a valid point.
To the extent that the article is narrowly targeted at this website, it could be read as an ‘expose’ on groupthink or the dangers of epistemic closure. That’s a more charitable reading. But consider sentences like: “What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.” Which is to say, the author seems to be using LW as a central example of the social category “intellectuals with a focus on science and technology”, rather than using LW as a test case of a community with unusual conventions.
From what I’ve seen, it seems like very few people who know the basilisk believe it (<10 maybe?), but there are some people (still not a lot, but significantly more than 10), who avoid the basilisk just in case it is dangerous, because of EY’s reaction.
It sounds like the actual unusual paradigm on LW is not so much “worried about the basilisk” as it is “unusually accommodating of people who worry about the basilisk”.
Modest proposal: practice “ontological rejection therapy” to decrease worry about basilisks, etc.:
Shout or type statements intended to draw punishment from every conceivable supernatural or post-Singularity entity.
Negative result: Nothing happens. Gain increased sanity and epistemological confidence.
Positive result: Devoured by Mind Flayers or equivalent. Surviving peers gain experimental data of immense value.
I was getting email from LW readers obsessing and worried by the basilisk, even though they knew intellectually it was a silly idea, and unable to talk about it on LW. That’s why I started the RW article (which, btw, this Slate article neither mentions nor links to), because individual email doesn’t scale. None since that.
I had a similar but much lesser reaction (mildly disquieting) to the portrait of Hell given in A Portrait of the Artist as a Young Man. I found the portrait had strong immediate emotional impact. Good writing.
More strangely, even though I always had considered the probability that Hell exists as ludicrously tiny, it felt like that probability increased from the “evidence” of a fictional story.
Likely all sorts of biases involved, but is there one for strong emotions increasing assigned probability?
I found Iain M. Banks’s Surface Detail to be fairly disturbing (and I’m in the Roko’s-basilisk-is-ridiculous camp); even though the simulated-hell technology doesn’t currently exist (AFAWK), having the salience of the possibility raised is unpleasant.
Surface Detail’s portrayal of Hell struck me as ugly and vulgar but not very disturbing. Some of the gratuitous nastiness in Consider Phlebas was worse, for example (the Eaters scene in particular), and so were some of Vatueil’s simulated battle scenes; I think they came across as more salient because they didn’t map onto cultural tropes I’d already rejected, and because they didn’t come across as being scripted for a quasi-political morality play.
I also was disgusted by “Player of Games”, the only Banks novel I read. Is all of Banks’ writing like this?
My first Banks was The Wasp Factory, which is pretty much a tour de force of tastelessness; all further examples are much less severe.
He tends to have one scene of severe nastiness in every book.
I don’t recall anything disgusting in Player of Games?
What about when the protagonist visits the brothel?
The alien culture in that one is all about brutality and domination. I didn’t see a point to reading it, unless you like reading about fantasy violence.
In the context of the Culture novels, the polite way of putting it would be that Banks had a penchant for using scenes of extreme horror and depravity as contrast to the utopian aspects of his writing, not to mention the SF spy games and gun porn. I’ve never read a novel of his that didn’t have at least some of the same, though, and I’ve read some of his non-SF work.
I think people totally privilege hypotheses they’ve read about in compelling fiction. It’s taking on board fictional evidence. I find it helps to keep in mind that a plausible story has too many details to be probable—“plausible” and “probable” are somewhat opposites—though it’s harder to remember for a compelling story.
When reporters interviewed me about Bitcoin, I tried to point to LW as a potential source of stories and described in a positive way. Several of them showed interest, but no stories came out. I wonder why it’s so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn’t done much yet except publish a few manifestos.
There’s an easy answer and a hard answer.
The easy answer is that, for whatever reason, the media today is far more likely to run a negative story about the tech industry or associated demographics than to run a positive story about it. LW is close enough to the tech industry, and its assumed/stereotyped demographic pattern is close enough to that of the tech industry, that attacking it is a way to attack the tech industry.
Observe:
“highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality … techno-futurism … high-profile techies like Peter Thiel … some very influential and wealthy scientists and techies believe it … computing power … computers … computer … mathematical geniuses Stanislaw Ulam and John von Neumann … The ever accelerating progress of technology … Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil … exponential increases in computing power … Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes … the machine equivalent of God … rational action … a smattering of parallel universes and quantum mechanics on the side … supercomputer … supercomputer … supercomputer … supercomputer … autism … Yudkowsky and other so-called transhumanists are attracting so much prestige and money for their projects, primarily from rich techies … messianic ambitions, being convinced of your own infallibility, and a lot of cash”
Out of the many possible ways to frame the article, Slate chose to make it about “rich techies”. Why postulate that Omega has a supercomputer? Why repeat the word ‘supercomputer’ four times in five sentences? The LW wiki doesn’t mention computers of any sort, and the Wikipedia article only uses the word ‘computer’ twice. advancedatheist said above that the cryonics claim is false; assuming he’s right, why include a lie?
It’s clearly not a neutral explanation of the Basilisk—and it fits into a pattern.
The hard answer would include an explanation of this pattern. (I’m not sure whether it would be a good idea to speculate about this in this particular thread, so: anyone who’s tempted to do so, take five minutes and think over the wisdom of it beforehand.)
Personal contacts between the people employed at Wired magazine and a lot of hackers are quite strong. Wired actually had an intention of pushing projects like the Cypherpunks or, in recent years, the Quantified Self movement, which they essentially founded (Kevin Kelly and Gary Wolf are both Wired editors).
I don’t think that LW is really the place that needs positive PR. I can’t really think of a story about LW that I want to tell a reporter. I can think of stories about MIRI or about CFAR but LW itself doesn’t need PR.
That’s a great point. LW is not MIRI. LW comments are not MIRI research. LW moderation policy is not FAI source code. Etc.
The proper response to the basilisk would probably be: “So, tell me about the most controversial comment ever in your web discussions. You know, just so I can popularize it as the stuff your website is really about.”
I don’t think the idea is that LW is about the basilisk, but rather that the nature of the basilisk exposes flaws of LW. Whether it does that depends on circumstances; while it’s trivially true that any website has a most controversial comment, not every website has a most controversial comment that happened like the basilisk did.
The basilisk seems to pretty much be the first thing outsiders know to associate with LW these days.
Well, for Charlie Stross it’s practically professional interest :P
Anyhow: anecdote. Met an engineer on the train the other day. He asked me what I was reading on my computer, I said LW, he said he’d heard some vaguely positive things, I sent him a link to one of Yvain’s posts, he liked it.
Eliezer Yudkowsky’s reasons for banning Roko’s post have always been somewhat vague. But I don’t think he did it solely because it could cause some people nightmares.
(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):
…and further…
His comment indicates that he doesn’t believe that this could currently work. Yet he also does not seem to dismiss some current and future danger. Why didn’t he clearly state that there is nothing to worry about?
(2) The following comment by Mitchell Porter, to which Yudkowsky replies “This part is all correct AFAICT.”:
If Yudkowsky really thought it was irrational to worry about any part of it, why didn’t he allow people to discuss it on LessWrong, where he and others could debunk it?
“Doesn’t work against a perfectly rational, informed agent” does not preclude “works quite well against naïve, stupid newbie LW’ers that haven’t properly digested the sequences.”
Memetic hazard is not a fancy word for coverup. It means that the average person accessing the information is likely to reach dangerous conclusions. That says more about the average of humanity than the information itself.
Good point. To build on that here’s something I thought of when trying (but most likely not succeeding) to model/steelman Eliezer’s thoughts at the time of his decision:
At least Eliezer’s move has focused all attention on the current (and easily debunked) basilisk, and it has made it sufficiently low-status to try and think of a better one. So in this sense it could even be called a success.
I would not call it a success. Sufficiently small silver linings are not worth focusing on with large-enough clouds.
There were several possible fairly-good reasons for deleting that post, and also fairly good reasons for giving Eliezer some discretion as to what kind of stuff he can ban. Going over those reasons (again) is probably a waste of everybody’s time. Who cares about whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc.?
XiXiDu cares about every Eliezer potential-mistake.
We’re discussing an article that judges LW for believing in the basilisk. Whether the founder believes in the basilisk is a lot more pertinent to judging LW than whether some randomly chosen person on LW believes in it, so there’s a good reason to discuss Eliezer’s belief specifically.
Mainstream: someone using Roko’s basilisk as a passing metaphor on an Australian rules football forum.
Both the article and the comments give me hope. My impression is that they treat these things more seriously than we would have seen in, e.g., 2006. This is despite the author’s declared snarkiness.
Less Wrong is getting mainstream!
This really doesn’t deserve all that much attention (It’s blatant fear-mongering. If you’re going to write about the Basilisk, you should also explain Pascal’s mugging as a basic courtesy.), but there’s one thing that this article makes me wonder:
I occasionally see people saying that working on Friendly AI is a waste of time. Yet at the same time it seems very hard to ignore the importance of existential risk prevention. I haven’t seen a lot of good arguments for why an AGI wouldn’t be potentially dangerous. So why wouldn’t we want some people working on FAI? There’s a lot of existential risk, not everyone can work on the same one.
I also disagree with the comparison between Roko’s Basilisk and Newcomb’s problem. With a thought-experiment, you need to make some assumptions, such as the scenario being true. It’s meaningless to talk about Newcomb’s if you don’t assume Omega exists (within the context of the thought-experiment). Roko’s Basilisk, on the other hand, is about how we should act in real life. This changes a lot of variables. If we proposed a thought-experiment in which the Basilisk actually exists, the comparison would fly.
Also, in Newcomb’s problem, the goal is to go away with as much money as possible. So it’s obvious what to optimize for.
What exactly is the goal with the Basilisk? To give as much money as possible, just to build an evil machine which would torture you unless you gave it as much money as possible, but luckily you did, so you kinda… “win”? You and your five friends are the selected ones who will get the enjoyment of watching the rest of humanity tortured forever? (Sounds like how some early Christians imagined Heaven. Only the few most virtuous ones will get saved, and watching the suffering of the damned in Hell will increase their joy of their own salvation.)
Completely ignoring the problem that just throwing a lot of money around doesn’t solve the problem of creating a safe recursively self-improving superhuman AI. (Quoting Sequences: “There’s a fellow currently on the AI list who goes around saying that AI will cost a quadrillion dollars—we can’t get AI without spending a quadrillion dollars, but we could get AI at any time by spending a quadrillion dollars.”) So these guys working on this evil machine… hungry, living in horrible conditions, never having a vacation or going on a date, never seeing a doctor, probably having mental breakdowns all the time; because they are writing the code that would torture them if they did any of that… is this the team we could trust with doing sane and good decisions, and getting all the math right? If no, then we are pretty much fucked regardless of whether we donated to the Basilisk or not, because soon we are all getting transformed to paperclips anyway; the only difference is that 99.9999999% of us will get tortured before that.
How about, you know, just not building the whole monster in the first place? Uhm… could the solution to this horrible problem really be so easy?
Yes.
No. All people who never heard of the Basilisk argument would also live in heaven. Even all people who heard of it in a way where it was clear that they wouldn’t take it seriously would live in heaven.
That isn’t necessarily true. The kind of reasoning assumed in the Basilisk uFAI would also use the ‘innocents’ as hostages if it would help to extort compliance from the believers. It depends entirely on the (economic power weighted aggregate) insanity of the ‘suckers’ the uFAI is exploiting.
The basilisk gets more compliance from the believers when he puts the innocents into heaven than when he puts them into hell. Also, the debate is not about an UFAI but a FAI that optimizes the utility function of general welfare with TDT.
This is also the point where you might think about how Eliezer’s censorship had an effect. His censoring did lead you and Viliam_Bur to an understanding of the issue where you think it’s about an UFAI.
This is at best not clear. It depends on the specific nature of the insanity in the compliant. Note that brutally disincentivizing evangelism has… instrumental downsides.
Don’t be misled by the loose relationship with Pascal’s Wager. This isn’t about belief, it is about decisions (and counterfactual decisions).
The use of the term uFAI is deliberate, and correct. We don’t need to define a torture-terrorist as Friendly just because of some sloppy utilitarian reasoning. Moreover, any actual risk from the scenario comes from AGI creators (or influencers) that make this assumption. That’s the only thing that can cause the torture to happen.
You are overconfident in your mind reading skills. I was one of the few people who were familiar enough with the subject matter at the time when Roko was writing his (typically fascinating) posts that I categorised the agent as a plausible not-friendly AGI immediately, the scenario as an interesting twist on acausal extortion then went straight to thinking about the actual content of the post, which was about a new means of cooperation.
Roko’s post explicitly mentioned trading with unfriendly AIs.
yeah, the horror lies in the idea that it might be morally CORRECT for an FAI to engage in eternal torture of some people.
There is this problem with human psychology that threatening someone with torture doesn’t contribute to their better judgement.
If threatening someone with eternal torture would magically raise their intelligence over 9000 and give them ability to develop a correct theory of Friendliness and reliably make them build a Friendly AI in five years… then yes, under these assumptions, threatening people with eternal torture could be the morally correct thing to do.
But human psychology doesn’t work this way. If you start threatening people with torture, they are more likely to make mistakes in their reasoning. See: motivated reasoning, “ugh” fields, etc.
Therefore, the hypothetical AI threatening people with torture for… well, pretty much for not being perfectly epistemically and instrumentally rational… would decrease the probability of Friendly AI being built correctly. Therefore, I don’t consider this hypothetical AI to be Friendly.
[removed]
This question is equivalent to: “How about, you know, just building a Friendly AI? Uhm… could the solution to the safe AI problem really be so easy?”
These questions are equivalent in the same sense as “how about just not setting X equal to pi” and “how about just setting X equal to e” are equivalent. Assuming you can do the latter is a prediction; assuming you can do the former is an antiprediction.
To the contrary, “just building the [very specific sort of] whole monster” is what’s more equivalent to “just building a [very specific definition of] Friendly AI”, an a priori improbable task.
Worse for the basilisk: at least in the case of Friendly AI you might end up stuck with nothing better to do but throw a dart and hope for a bulls-eye. But in the case of the basilisk, the acausal trade is only rational if you expect a high likelihood of the trade being carried out. But if that likelihood is low then you’re just being nutty, which means it’s unlikely for the other side of the trade to be upheld in any case (acausally trying to influence Omega’s prediction of you may work if Omega is omniscient, but not so well if Omega is irrational). This lowers the likelihood still further… until the only remaining question is simply “what’s the fixed point of x_{n+1} = x_n/2?”
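To spell out that closing question with a minimal worked example (the starting value and iteration count are arbitrary): solving x = x/2 gives x = 0, so zero is the only fixed point, and iterating the map from any starting likelihood drives it there.

```python
# The map x -> x/2 has a single fixed point: solving x = x/2 gives x = 0.
# Iterating from any starting "likelihood the trade is carried out"
# (the 0.8 below is arbitrary) converges toward that fixed point.

def halve(x: float) -> float:
    return x / 2.0

x = 0.8
for _ in range(50):
    x = halve(x)

print(x)                  # ~7.1e-16, i.e. effectively zero
print(halve(0.0) == 0.0)  # True: 0 is the fixed point
```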
Consider my parallel changed to “How about, you know, just not building an Unfriendly AI? Uhm… could the solution to the safe AI problem really be so easy?”
There are many possible Unfriendly AI, and most of them don’t base their decision of torturing you on whether you gave them all your money.
Therefore, you can use your reason to try building a Friendly AI… and either succeed or fail, depending on the complexity of the problem and your ability to solve it.
But not depending on blackmail.
This is the difference between “you should be very careful to avoid building any Unfriendly AI, which may be a task beyond your skills”, and “you should build this specific Unfriendly AI, because if you don’t, but someone else does, then it will torture you for an eternity”. In the former case, your intelligence is used to generate a good outcome, and yes, you may fail. In the latter case, your intelligence is used to fight against itself; you are forcing yourself to work towards an outcome that you actually don’t want.
That’s not the same thing. Building a Friendly AI is insanely difficult. Building a Torture AI is insane and difficult.
Yes, the Basilisk does address how one should act in real life. It says: ‘Don’t build a basilisk, dummy!’. Problem solved.
Newcomb’s problem applies just as much in real life. I could in real life approach you claiming to be Omega, or as an agent of the Matrix or whatever. How would you respond?
If you generalize to “acausal trades” (suspect, IMHO, but commonly accepted here), you get Roko’s basilisk. A proper decision theory would advise you not to negotiate with terrorists, so Roko’s basilisk is not a real concern. But honestly the subset of people that really understand decision theory enough to find that out on their own is quite small, hence the ban.
Not unless you believe in gods. Omega is just a locally popular name for “a god”.
Depends on the circumstances, but the set of possible responses includes “nod and slowly back away”, “laugh”, “make a note in the contacts list saying ‘batshit crazy’”, “look for someone to take care of you until the drugs wear off”, etc.
With either disbelief or ridicule, depending on how charitable I’m feeling at that moment.
Then you are inconsistent—if you profess to be a finite utilitarian; are you?
Dat irony tho.
Stylistic complaint: “we”? I don’t think me reading your post means you and I are a “we”. This is a public-facing website, your audience isn’t your club.
As to the actual question, CellBioGuy’s answer is spot-on.
This seems to be a popular topic for non-LW sources to link to us over. So in the interests of publicity, entertainment, reduced x-risk, and marginally less risk of anyone ever being tortured, here is a stronger, yet seemingly more benign alternative: the Gentle Judge.
GJ is more intuitive: Fair punishments for intentional refusals to cooperate with Friendly goals. One simple way to determine a fair punishment would be to examine the individual’s own preferences, and see how they would punish another person according to their own value system, given the same set of actions and consequences. However, there could also be a ceiling based on what the group of individuals being considered for punishment would believe is fair (ensuring that unusually vindictive outliers don’t end up being punished excessively).
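A toy numeric sketch of that rule, just to make the arithmetic concrete: punish each person by their own preferred severity for the same act, capped at a group-derived ceiling. The function names, the use of a median, and the severity numbers are my own illustrative choices rather than anything specified in the proposal.

```python
from statistics import median

def gj_punishment(own_preferred_severity: float,
                  group_preferred_severities: list) -> float:
    """Punish by the person's own standard, capped at the group's idea of 'fair'."""
    ceiling = median(group_preferred_severities)  # one possible group-derived ceiling
    return min(own_preferred_severity, ceiling)

group = [1.0, 2.0, 2.5, 3.0, 50.0]  # arbitrary severity units; 50.0 is a vindictive outlier
print(gj_punishment(50.0, group))   # the outlier is capped at the group median (2.5)
print(gj_punishment(1.0, group))    # a lenient judge is held only to their own lenient standard
```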
Like Roko’s Basilisk, GJ depends on people imagining it in sufficient detail and being influenced by that to cooperate. Unlike RB, it is not hyper-morally-counterintuitive, so more people would tend to adopt it (all else equal). This means that more people could e.g. donate to MIRI (if that is the best path to GJ’s goals in their subjective estimation). Moreover, if GJ is more likely to be successful, any worlds with RB would actually punish people who fail to promote and/or implement GJ instead of RB.