On the other hand, does not banning these debates contribute to having fewer of them? It doesn't seem so. We have already had a dozen of them, and we are going to have more, and more, and more...
I can’t know what happened in the parallel Everett branch where Eliezer didn’t delete that comment… but I wouldn’t be too surprised if the exposure of the basilisk was pretty much the same—without the complaining about censorship, but perhaps with more of “this is what people on LW actually believe, here is the link to prove it”.
I think this topic is debated mostly because it’s the clever contrarian thing to do. You have a website dedicated to rationality and artificial intelligence where people claim to care about humanity? Then you get contrarian points for inventing clever scenarios of how using rationality will actually make things go horribly wrong. It’s too much fun to resist. (Please note that this motivated inventing of clever horror scenarios is different from predicting actual risks. Finding actual risks is a useful thing to do. Inventing dangers that exist only because you invented and popularized them is not very useful.)
On the other hand, does not banning these debates contribute to having fewer of them?
The debates are not technically banned, but there are still strict limits on what we’re allowed to say. We cannot, for instance, have an actual discussion about why the basilisk wouldn’t work.
Furthermore, there are aspects other than the ban that make LW look bad. Just the fact that people fall for the basilisk makes LW look bad all by itself. You could argue that the people who fall for the basilisk are mentally unstable, but having too many mentally unstable people, or being too willing to restrict normal people for the sake of mentally unstable people, makes us look bad too. Ultimately, “looking bad” happens because there are aspects of LW that people consider to be bad. It’s not just a public relations problem: the basilisk demonstrates a lack of rationality on LW, and the only way to fix the bad perception is to fix the lack of rationality.
One of the problems is that the basilisk is very weird, but the prerequisites—which are mostly straight out of the Sequences—are also individually weird. So explaining the basilisk to people who haven’t read the Sequences through a few times and haven’t been reading LessWrong for years is … a bit of work.
Presumably, you don’t believe the basilisk would work.
If you don’t believe the basilisk would work, then it really doesn’t matter all that much that people don’t understand the prerequisites. After all, even understanding the prerequisites won’t change their opinion of whether the basilisk is correct. (I suppose that understanding the Sequences may change the degree of incorrectness—going from crazy and illogical to just normally illogical—but I’ve yet to see anyone argue this.)
Are you saying it’s meaningless to tell someone about the prerequisites—which, as I note, are pretty much straight out of the Sequences—unless they think the basilisk would work?
It’s not meaningless in general, but it’s meaningless for the purpose of deciding that they shouldn’t see the basilisk because they’d misunderstand it. They don’t misunderstand it—they know that it’s false, and if they read the Sequences they’d still know that it’s false.
As I pointed out, you could still argue that they’d misunderstand the degree to which the basilisk is false, but I’ve yet to see anyone argue that.