What’s the big scary secret?
This is mainly what I want to know. From the comments on this post, it looks like W_N claims to have (read: genuinely has, genuinely thinks he has, or trolls as though he has) come across something he can’t tell people about—a basilisk, some conspiracy-theory-type information, something. Being a relative newcomer unwilling to go through large numbers of his previous posts, I’d like to know if anyone who’s seen him longer has any more information.
Also, this whole thing is absolutely hilarious to read.
I have a few ideas:
1) It’s a “basilisk”, i.e. an imaginary Lovecraftian threat that doesn’t even make sense outside of some highly particular and probably wrong belief system. (That’s not my definition of basilisk, but it is what I think of such claims.)
2) Some mundane fact about the difficulty or danger of actually trying to save the world (in the specific sense of shaping a singularity) has made his blood run cold. It could be the existence in the real world of powerful evil cliques; it could be the psychological potential of joining them, or just of becoming selfish in an ordinary sense.
3) I remember when I was 22 and realized (according to the plans I had at the time) that it might take me eight years to save the world. That was very daunting, because at the time it looked like it would be a joyless, stressful, solitary existence, for an unimaginably long period of time. And as it turned out, I didn’t even get it done… Will could be fleeing the responsibilities of his “position”—I mean his existential position, which surely includes the perception that he has the potential to make a difference in a huge way.
ETA 4) He wants to create a barrier (a “credibility barrier”) between himself and his former associates in SI, so as to develop his own thinking, because there’s a systematic deficiency in their outlook and he must avoid the temptation of working within that paradigm.
Right... that would be bad. But I doubt it. They are too crazy for that, just like all the other extremists. And besides, they are not even able to protect themselves from theft, even though they are a relatively small group. Still, even crazies can do damage. I just hope one of them will blow the whistle on any plans before that happens.
Would you have written the same comment if the header of this site didn’t read “a community blog devoted to refining the art of human rationality” but instead “computational theology”?
No, but that’s a fallacious comparison. The header does in fact read “a community blog devoted to refining the art of human rationality,” and I’m here because I want to read that kind of site.
Also, I’ve read some of Will’s “computational theology” blog. His posts there seem to consist of actual reasoning and logic and such, whereas over here his posts on the same general topic tend toward “I’ve got a big secret I’m not going to tell you, so there, nyaah.” (My apologies if this is an unfair representation, but that’s the impression I’ve formed.)
I mean, I get why Newsome would want to obscure this: a lot of people get off on being seen as “mysterious” or whatever. But there do seem to be a number of people here who understand what is going on but are refusing to offer their explanations, even though a lot of people here are confused.
Maybe they take the basilisk threat seriously? That would be crazy/sad if true.
Edit: Also, there are now a number of people openly asking for explanations, but all we are getting is speculation from people who also don’t know what is going on. I’m starting to get annoyed with this.
Just don’t be too impressed by intelligence. The fact that those people can disgorge some math doesn’t lend their extraordinary claims much credence. Most of the credence they assign is based on mutual reassurance anyway, just like a bunch of ufologists updating on each other’s evidence of alien abductions.
Given normal assumptions, additional claims of abductions should provide additional evidence. I don’t think you’ve quite pinned down the error with your example.
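The error being gestured at is presumably double-counting correlated reports. A toy Bayesian sketch can make that concrete (all numbers here are hypothetical, chosen only for illustration):

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each report's likelihood ratio.

    This product rule is only valid if the reports are independent
    given the hypothesis; correlated reports must not each get
    their own factor.
    """
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1e-6  # made-up prior odds for "abductions are real"

# Five reports treated as independent, each mildly favoring the claim:
independent = posterior_odds(prior, [2.0] * 5)  # prior grows 32-fold

# If all five reports trace back to one shared story (mutual
# reassurance), the extra copies carry almost no new information,
# so roughly one likelihood ratio applies, not five:
correlated = posterior_odds(prior, [2.0])  # prior merely doubles

print(independent, correlated)
```

So additional reports do count as evidence under independence; the ufologist failure mode is applying the independent-evidence update to reports that are really one piece of evidence retold.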
There is no big scary secret. The only danger worth worrying about is that this community of schizo sci-fi nerds is going to have some perceptible and negative influence by spreading and popularizing their bullshit. That will mainly be a problem for the computer science community, especially AI research, since those people are naturally susceptible to such infections.
But I am not too worried about that either. If all the people who buy this bullshit stop working on AI, then maybe that will renovate the field and actually allow some real progress to take place, by giving new ideas a chance and by introducing new perspectives less deluded by science-fictional ideas. In a sense, LessWrong/SIAI might function as a crackpot attractor, stripping out the negative elements so that actual progress can take place.
Alicorn, if a “should this be moderated” poll is required anywhere in this thread this is the kind of trolling that needs to be targeted. Way across the line.
It’s rare enough that handing people with those sentiments the weapon of “any dissent is immediately silenced” is worse than the disease.
Not remotely suggested. Since when does “immediately” mean “after a spiraling trend over a couple of years”?
I suggest that if this is applicable to any behavior in this thread it is to the actual trolling, not Will just being a crackpot.
The quoted sentence refers to the weapon, not the event from which it’s shaped (through misrepresentation or motivated misinterpretation). Even community voting that hides comments that happen to be critical is being used as fuel for accusations of censorship.
Only a fraction of my comments are outright critical, and I am posting just a few comments per week. There have been dozens of highly critical comments lately not made by me, some of them containing direct personal attacks.
If you really perceive the few harsh comments that I make, which reflect a widely held opinion, to be too much, then you have lost all touch with reality and require much more criticism than I could deliver.
Wait a few more years and the shitstorm is going to increase by orders of magnitude and I won’t even be part of it.
Do you really believe that you can get away with your attitude? Be prepared to be surprised.
And stop calling everything “trolling”. It’s really getting boring.
What is way across the line is when people start asking about “secrets” and basilisks and there is any chance of such possibilities being taken seriously. What is way across the line is when an organisation tries to actively impede research.
Some harsh words are completely appropriate then.
I have no problem with Will Newsome and find a lot of his output enjoyable. But if he starts to lend credibility to crazy shit like basilisks in people’s minds, then that has to be said.
I endorse you not worrying about SI impeding AI progress on any significant scale.
I would also endorse, if you’re genuinely interested in encouraging AI research, devoting more of your attention to the problems that are actually impeding AI progress.