I’m curious.
I am in the following epistemic situation: a) I missed, and thus don’t know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).
Out of the members here who share roughly this position, am I the only one who, having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances, is PLEASED that the topic was censored?
I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.
Of course, maybe I’m miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.
(David Gerard, I’d be grateful if you could let me know if the above trips any cultishness flags.)
I award you +1 sanity point.
(I note that the Langford Basilisk in question is the only information that I know and wish I did not know. People acquainted with me and my attitude towards secrecy and not-knowing-things in general may make all appropriate inferences about how unpleasant I must find it to know the information, to state that I would prefer not to.)
Upvoted both the parent and the grandparent because I was nervous having no clue what was going on, looked at the basilisk, and would rather I hadn’t. I’m not clever/imaginative enough to be sure why I shouldn’t have done it, but it was still a dumb move. I’m glad the thing was censored and I applaud leonhart for being sensible.
I’m not clever/imaginative enough that I shouldn’t have done it, if people really shouldn’t do it. On the other hand, if I somehow found out that people who have done it were taking drastic actions, that would worry me enough to make further investigations; but as far as I can tell I’m probably better off knowing if that’s the case (I think; it depends on how altruistic those people are, what EY and the SIAI can actually do, how many-worlds/”quantum immortality” works, etc.). Quite honestly it’s far less of a worry to me than more mundane friendliness failures.
Though reading this comment and others like it has managed to convince me not to seek out the deleted post, I can’t help but think that they would be aided by a reminder of what it means to be Schmuck Bait.
[comment deleted]
I don’t think it’s quite that extreme. For example, I wish I wasn’t as intelligent as I am, wish I was more normal mentally and had more innate ability at socializing and less at math, wish I didn’t suffer from smart sincere syndrome. I think these are all in roughly the same league as the banned material.
Why wish for less innate ability at math? Why not just wish for being better at socializing/communicating?
Are you sure it’s the basilisk itself you’d prefer to expunge, rather than some earlier concept without which you would lack the metabolic pathways for self-petrification?
Not really :-) If you keep awareness of the cult attractor and can think of how thinking these things about an idea might trip you up, that’s not a flawless defence but will help your defences against the dark arts.
What inspired the phrase “invincible mind fortresses”? I like it. Everyone thinks they live in one, that they’re far too intelligent/knowledgeable/rational/Bayesian/aware of their biases/expert on cults/etc. to fall into cultishness. They are of course wrong, but try telling them that. (It’s like being smart enough to be quite aware of at least some of your own blithering stupidities.)
(I read the forbidden idea. It appears I’m dumb and ignorant enough to have thought it was just really silly, and this reaction appears common. This is why some people find the entire incident ridiculous. I admit my opinion could be wrong, and I don’t actually find it interesting enough to have remembered the details.)
Same here. I think (though no one has given a definitive answer) that there is concern about the general case of the specific hypothetical incident discussed therein, not the specific incident itself.
Hmm. I only read it recently, so maybe I haven’t thought through the general case enough, but I think my solution (assuming it’s not totally absurd) of treating it as though it is really silly, with the caveat that if it becomes non-silly I’m not exactly powerless, would work for all such cases.
Thank you. I’ve found your comments very useful, not least because when younger I came uncomfortably close to being parted from a reasonable sum of money by a group who understood the Dark Arts rather well. That was before I read Cialdini, but I’m not sure how well it would have sunk in without the object lesson.
I’m not good at thinking things are silly. That’s great for getting suspension of disbelief and fun out of certain things (for example, I can enjoy JRPG plots :) but it’s also a spot where one can be hit for massive damage.
As for the happy phrasing, I might have been thinking of this. (Warning: 4chan, albeit one of its nicer suburbs.)
Again you tell us. Some people who think that are right. They are NOT “of course” wrong. A random person isn’t guaranteed to be vulnerable, and there are people of whom you can say that they are most certainly invincible. That any person is “of course vulnerable” is of course wrong as a point of simple fact.
I would be interested in hearing about your evidence for the existence of people who are “most certainly invincible” to cultishness, as I’m not sure how I would go about testing that.
I think a lot more people are vulnerable than consider themselves vulnerable. You can substitute “most” for “all” if you like.
I mainly object to “of course”, and your argument cited here (irrespective of its correctness) doesn’t even try to support it. Please be more careful in what you use; you can’t just throw in an arbitrarily picked affective soldier, it has to actually argue for the conclusion it’s supposed to support (i.e. be (inferential) evidence in its favor to an extent that warrants changing the conclusion).
I wasn’t making an argument (a series of propositions intended to support a conclusion), I was talking about the subject in passing. These are different modes of communication, and I would have thought it reasonably clear which one was being used.
The “of course” is because it’s a cognitive error: people are sure it could never happen to them. I observe them being really quick and really certain of that when they hear of someone else falling for cultishness: that’s the “of course”. In some cases this will be true, but it’s far from universally true. I don’t know which particular error or combination of errors it is, but it does seem to be a cognitive error. It is true that I need to work out which ones it is, so that I can talk about it without those people who reply “aha, but you haven’t proven right here that it’s every single one, aha” and think they’ve added something useful to the discussion of the topic.
I see. So they can sometimes be accidentally correct in expecting that they are not vulnerable, as in fact they will not be vulnerable, but their level of certainty in that fact will almost certainly (“of course”) be off in a systematic predictable way. This interpretation works.
I think of the “talking about the subject in passing” mode as “making errors, because it’s easier that way”, which looks to me like a good argument for making errors, but they are still errors.
In general, I treat attempts to focus my attention on any particular highly-unlikely-but-really-bad scenario as an invitation to inappropriately privilege the hypothesis, probably a motivated one, and I discount accordingly. So on balance, yeah, you can count me as “playing along” the way you mean it here.
I don’t think my mind-fortress is invincible, and I am perfectly capable of being hurt by stuff on the Internet. I’m also perfectly capable of being hurt by a moving car, and yet I drive to work every morning.
And yes, if the dangerousness of the Dangerous Idea seems more relevant to you in this case than the politics of the community, I think you’re miscalibrated. The odds of a power struggle in a community in which you have transient membership affecting your life negatively are very small, but I’d be astonished if they were anything short of astronomically higher than the odds of the Dangerous Idea itself affecting your life at all.
I also regret contact with the basilisk, but would not say it’s the only information I wish I didn’t know, nor am I entirely sure it was a good idea to censor it.
When it was originally posted I did not take it seriously; it only triggered “severe mental trauma”, as others are saying, when I later read someone referring to its being censored, felt some curiosity regarding it, and updated on the fact that it was being taken that seriously by others here.
I do not think the idea holds water, and I feel I owe much of my severe mental trauma to an ongoing anxiety and depression stemming from a host of ordinary factors, isolation chief among them. I would STRONGLY advise everyone in this community to take their mental health more seriously, not so much in terms of basilisks as in terms of being human beings.
This community is, as it stands, ill-equipped to charge forth valiantly into the unknown. It is neurotic at best.
I would also like to apologize for whatever extent I was a player in the early formation of the cauldron of ideas which spawned the basilisk and I’m sure will spawn other basilisks in due time. I participated with a fairly callous abandon in the SL4 threads which prefigure these ideas.
Even at the time it was apparent to anyone paying attention that the general gist of these things was walking a worrisome path, and I basically thought “well, I can see my way clear through these brambles, if other people can’t, that’s their problem.”
We have responsibilities, to ourselves as much as to each other, beyond simply being logical. I have lately been reexamining much of my life, and have taken to practicing meditation. I find it to be a significant aid in combating general anxiety.
Also helpful: clonazepam.
If you join a community concerned with decision theory, are you surprised by the fact that they take problems in decision theory seriously?
There is no expected payoff in harming me just because decision theory implies it is rational, because I do not follow such procedures. If something wants to waste its resources on it, I win, because I weaken it. It has to waste resources on me that it could use in the dark ages of the universe to support a protégé, and it never receives any payoff for this, because I do not play along in any branch in which I exist. You see, any decision theory is useless if you deal with agents that don’t care about it. Utility is completely subjective too; as Hume said, “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger”. The whole problem in question is just due to the fact that people think that if decision theory implies a strategy is favorable then you have to follow through on it. Well, no. You can always say, fuck you! The might of God and terrorists is in the mind of their victims.
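A minimal way to formalise the point above (symbols invented for illustration, not anything from the comment itself): let $c > 0$ be the blackmailer’s cost of carrying out its threat, $v$ the value it gains if the target complies, and $q$ the probability that the target complies.

$$\mathbb{E}[\text{payoff from threatening}] = q\,v - (1 - q)\,c$$

Against an agent genuinely precommitted to $q = 0$, this reduces to $-c < 0$: the threat burns resources and never pays, so a blackmailer that can predict the precommitment should not bother making it.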
Are they? Are they really? What actual, concrete actions have been taken, or are planned, regarding the basilisk? If people actually make material sacrifices based on having seen the Basilisk, then I’m willing to take it seriously, if only for its effects on the human mind. Then again, in the most worrying (or third most worrying, I guess) case, they would likely hide said activities to prevent anything from damaging their plans. They could also hide it out of altruism, to keep from disturbing halfway smart basilisk seers like us, I guess.
I’m pretty sure no one firmly believes in the basilisk, simply because everyone who was convinced by it would be spreading it as much as they could.
I saw the original post. I had trouble taking the problem that seriously in the general case. In particular, there seemed to be two obvious problems that arose from the post in question. One was a direct decision-theoretic basilisk; the other was a closely associated problem that was empirically causing basilisk-like results in some people who knew about the problem in question. I consider the first problem (the obvious decision-theoretic basilisk) to be extremely unlikely. But since then I’ve talked to at least one person (not Eliezer) who knows a lot more about the idea and who has asserted that there are more subtle aspects of the basilisk which could make it or related basilisks more likely. I don’t know if that person has a better understanding of decision theory than I do, but he’s certainly thought about these issues a lot more than I have, so it did move my estimate that there was a real threat here upwards. But even given that, I still consider the problems to be unlikely. I’m much more concerned about the pseudo-basilisk which empirically has struck some people. The pseudo-basilisk itself might justify the censorship. Overall, I’m unconvinced.
I agree.
Like Alicorn, this is the only thing I know that I wish I did not know.
On the plus side, it made me realise my utility function is not monotonic in knowledge.
I have read the idea. I am unscathed. It is not difficult to find, if you look.
There is some chance my mind fortress is better defended than other people’s (I am known to be level-headed in situations with and without the presence of imminent physical harm), but I don’t think that applies to this particular circumstance. It felt to me like something you would have to convince yourself to care about, and so for some people that may be easier than it is for others (or automatic).
Hi there, Vaniver. I figured I’d ask you about this, because others seem too disturbed by the idea for me to want to bring it up again. Anyway, I’ve been reading through old threads, and encountered mention of this “basilisk”… and now I’m extremely curious. What was this idea that made so many people uncomfortable?
Edit Update: On the advice of several people, I am leaving this alone for now. If I do go ahead and read it, I’ll edit this post again with my thoughts.
Please abandon this project, for your safety and comfort, that of people you might tell, and that of others whom your “benefactor” might be disposed to tell if you succeed in weakening someone’s resolve to keep it safely secret.
Since several posters reported that they were not affected by the basilisk, I am thinking my mental safety and comfort might not be affected. (I’m assuming you’re referring to the possibility of anxiety, etc? I do suffer from anxiety, but I’ve had to learn to deal with fairly horrific things, so I am not easily disturbed any more.) I certainly won’t tell anyone, even if I had someone to tell, and if someone has resolved to keep it secret I doubt they will tell me in the first place.
I’m not too worried about finding out, though; if no one wants to say, I won’t pressure anyone to. That’s why I have asked someone who wasn’t affected: they will surely be able to judge without fear making them irrational. If they still don’t want to say, I’ll just live with being curious.
I encourage you to accede to the tribal wishes and not tell anyone about the idea, at least within the tribe and the scope of where lesswrong can claim any influence whatsoever (as you’ve already agreed). As you say, you don’t sound like the sort of person who could be harmed by reading it personally, so you need not be concerned for your own sake.
It seems like it would be easy to predict an individual’s reaction to the thing by looking for correlated reactions between that and some other things from people who have seen it all, and then seeing how a given innocent reacts to those other things.
I bet some pretty strong patterns would emerge, and we could predict reactions to the thing. I do not think that protecting people from harm now is a true objection, for it could be dealt with by identifying vulnerable people and not making the whole topic such forbidden fruit.
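A rough sketch of how such screening could work, purely illustrative: it assumes (hypothetically) that distress reactions to a few unrelated proxy items predict reaction to the real thing, and that people who have already seen it can supply the labels. Every name and number below is invented.

    # Hypothetical screening sketch: estimate vulnerability to the removed
    # idea from reactions to proxy items, trained on people who saw it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows: people who saw the idea. Columns: self-reported distress (0-1)
    # on three unrelated "unsettling hypothetical" proxy items.
    proxy_reactions = np.array([
        [0.9, 0.8, 0.7],
        [0.1, 0.2, 0.0],
        [0.8, 0.9, 0.9],
        [0.2, 0.1, 0.3],
    ])
    was_harmed = np.array([1, 0, 1, 0])  # 1 = reported lasting distress

    model = LogisticRegression().fit(proxy_reactions, was_harmed)

    # A newcomer who has NOT seen the idea rates only the proxy items;
    # we estimate their risk without exposing them to anything.
    newcomer = np.array([[0.7, 0.6, 0.8]])
    print(model.predict_proba(newcomer)[0, 1])  # estimated P(harm)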
It depends on how strongly you believe in the singularity. It is easy to ignore the whole thing as silly (which is essentially what I do), but if you have slightly different priors (or reasoning), it may be harmful.
While that’s part of it, it doesn’t appear to be all of it. It seems like it only applies to a narrow range of possible singularities. I keep coming back to visibility bias when I think about this.
I too regret knowing the idea.
I’m certain that the forbidden topic couldn’t possibly hurt me (probability of that is zilch). Still, I agree that from what we know, considering it should be discouraged, based on an expected utility argument (it either changes nothing or hurts tremendously with tiny probability, but can’t correspondingly help tremendously because human value is a narrow target). Don’t confuse these two arguments.
(I think this is my best summary of the shape of the argument so far.)
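In symbols, the shape of that expected utility argument (illustrative only): let $p$ be the tiny probability that the idea genuinely hurts, and $H$ the tremendous size of the harm. “It either changes nothing or hurts” means the upside term is roughly zero, so

$$\mathbb{E}[U(\text{read})] \approx 0 \cdot (1 - p) - p\,H = -p\,H < 0,$$

and because human value is a narrow target there is no symmetric tiny-probability tremendous benefit to cancel it, however small $p$ is.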
The thing that’s been bugging me about this whole issue is: even given that a certain piece of information MAY (with really tiny probability) be highly (for lack of a better word) toxic… should we as humans really be in the habit of “this seems like a dangerous idea, don’t think about it”?
I can’t help but think this must violate something analogous (though not identical) to an ethical injunction. I.e., the chances of a human encountering an inherently toxic idea are so small, versus the cost of smothering one’s own curiosity/allowing censorship not due to trollishness, or even the revelation of technical details that could be used to do a really dangerous thing, but simply because an idea is judged dangerous to even think about...
I get why this was perhaps a very particular special circumstance, but am still of several minds about this one. “Don’t think about the deliciously forbidden dangerous idea, just don’t”, even if it perhaps actually is indicated in certain very unusual special cases, seems like the sort of thing that one would, as a human, want injunctions against.
Again, I’m of several minds on this however.
(EDIT: Just to clarify, that does not mean that I in any way approve of “existential threat blackmail” or that I’m even of two minds about that. That’s just epically stupid.)
(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision-theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my “being bugged” by the removal.)
Yeah, that was the reason that convinced me its removal from here was a good enough idea to bother enacting. I wouldn’t try removing it from the net, but due warning is appropriate. Such things attract curious monkeys to test the wet paint—but! I still haven’t seen 2 Girls 1 Cup and have no plans to! So it’s not assured.
I’ve seen it. It’s not really as interesting as the hype would suggest.
I feel the same as you, even though I know what the banned topic was. I haven’t thought about it too deeply, because, well, duh.
I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard; I’m still a bit unsure whether I figured out why people think of the idea as dangerous, but to me it seems to be just plain silly.
I don’t regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn’t need to care about it, and it seems that my initial guess was spot on. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.
Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn’t seek it. More like, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea, and if possible, it shouldn’t be discussed.
I’ve never seen the basilisk (and I have just about resisted the very powerful urge to seek it out), but if one of us came up with a dangerous idea, is it not likely that an AI would do the same? Taking into account the vastly greater capacity of an AI to cause harm if ‘infected’, might we not gain more from looking at the problem now, in case we can find a resolution (perhaps a better decision theory) and use that to avert a genuinely catastrophic outcome? Even if our hopes of solving the problem are not high, the probabilities and utilities may still advise it.
Of course, since I haven’t seen it, I might be totally misunderstanding the situation, or maybe there is an excellent reason why the above is wrong that I can’t understand without exposing myself to the basilisk. Even if this isn’t the case, it might still be best for a few people who have already seen it to work on the problem, rather than informing someone like me who probably wouldn’t be much help anyway.
If it’s not too much trouble, could you at least sate my burning curiosity by telling me which of the three options above, if any, is correct?
You’re totally misunderstanding the situation.
Thanks.
If you’re still curious, after all these years, and if another data point is still helpful-
I know the information in question, and I anticipate a non-negligible probability of being tortured horribly for knowing it (though presumably an FAI would figure out a way to make everyone think this happened rather than actually doing it), but oddly I am not sure whether I regret knowing it.
Aw, look, it’s someone sane.
Hi Eliezer. It took me way too long to figure out the right question to ask about this mess, but here it is: do you regret knowing about the basilisk?
I regret that I work in a job which will, at some future point, require me to be one of maybe 2 or 3 people who have to think about this matter in order to confirm whether any damage has probably been done and maximize the chances of repairing the damage after the fact. No one who is not directly working on the exact code of a foomgoing AI has any legitimate reason to think about this, and from my perspective the thoughts involved are not even that interesting or complicated.
The existence of this class of basilisks was obvious to me in 2004-2005 or thereabouts. At the time I did not believe that anyone could possibly be smart enough to see the possibility of such a basilisk and stupid enough to talk about it publicly, or at all for that matter. As a result of this affair I have updated in the direction of “people are genuinely that stupid and that incapable of shutting up”.
This is not a difficult research problem on which I require assistance. This is other people being stupid and me getting stuck cleaning up the mess, in what will be a fairly straightforward fashion if it can be done at all.