…making the site stagnant, reinforcing one narrow worldview without ever acknowledging it.
We are trying to enforce certain norms of reasoning (rationality, trying to draw a map that better reflects the territory) that go against human nature (cognitive biases, appeals to social instincts, clever rhetoric). From many perspectives, this alone is already a narrow worldview.
When I have the time and patience, I try to explain my downvotes. But sometimes I don’t, and it still feels important to me to downvote the obvious violations of the local norms.
I have no idea what the “create new account” dialog looks like these days, but perhaps it should include an explicit warning that you are supposed to read the Sequences and post accordingly… or get downvoted. It is okay to disagree (sometimes it is even a way to get many upvotes). It is not okay to just ignore the norms because you are not even aware of them.
Web discussions without downvotes typically end up with some people saying the same stupid things all the time, because nothing can stop them other than a moderator action, which is a more extreme response than a downvote.
EDIT:
I have read your posts (and downvoted one of them), and… sigh. You seem to be yet another Buddhist coming here to preach. You don’t explain, you don’t provide evidence, you just declare your beliefs as fact. You either haven’t read the Sequences, or you completely missed the point. This is exactly what downvotes are for.
That puts your complaint about “stagnant, narrow worldview” into a completely different light.
Even the Mormons give you some help before pointing you to their book.
I wish we had the same manpower the Mormons do; then we could afford to give more detailed feedback to everyone. To win verbal debates, it would also help to have a set of frequently practiced short replies to the most common arguments, but winning verbal debates is not why we are here, and explaining concepts takes more time.
Are you a sect using “rationality” as a free pass?
Maybe. (I mean, if I said “no”, would you believe me?) But this is probably the website most obsessed with rationality on the entire internet. Although that’s mostly because most other websites don’t care at all. This website started over a decade ago as a blog focusing on artificial intelligence and human rationality; the idea was roughly that we need to think more clearly in order to build machines smarter than us and survive doing that. As opposed to the alternative strategies, such as “hoping that everything will magically turn out to be alright” or “choosing your favorite keyword and insisting that this is the key to making everything alright”. (Looking at your posts written so far, your favorite keyword seems to be “non-dual language”.)
I will try to give you more specific feedback, but the mistakes are subtle; I am not sure whether this explanation will help. I think the largest mistake is that you start by jumping to conclusions without justifying them. You come here saying (I am paraphrasing here) “obviously, speaking the kind of language that makes a distinction between the one who is speaking and the rest of the environment is a mistake, actually it is the mistake, and if you abandon this kind of language, it will dramatically increase AI safety”. Which is an interesting idea, if you can provide evidence for it. But you just seem to take it for granted, and expect the audience to play along.
For starters, I suppose there is a reason why the “dual language” happened. Would or wouldn’t the same reason also apply to the superhuman artificial intelligence? I mean, if humans could invent something, a superhuman intelligence could probably invent it, too. Does that mean we are screwed when that happens?
Second, suppose that we have succeeded in making the superintelligence see no boundary between itself and everything else, including humans. Wouldn’t it mean that it would treat humans the same way I treat my body when I am e.g. cutting my nails? (Uhm, do people who use non-dual language actually cut their nails? Or do they just cut random people’s nails, expecting that strategy to work on average?) Some people abuse their bodies in various ways, and we have not yet established that the superintelligence would not, so there is a chance that the superintelligence would perceive us as parts of itself and still hurt us.
Finally, if the superintelligence sees no difference between itself and me, then there is no harm in lobotomizing me and making me its puppet. I mean, my “I” has always been a mere illusion anyway.
...these are the basic objections I got after five minutes of thinking about the topic. But generally, going in the direction of Sapir-Whorf hypothesis is a red flag.
(Relevant chapters from the Sequences that could have prevented you from making this mistake: Applause Lights—about being enthusiastic about a concept without having a detailed plan for how the concept would apply to the situation at hand; Detached Lever Fallacy—about why it is a mistake to assume that if doing something produces a certain reaction in humans, it will necessarily produce the same reaction in a machine.)
Now to answer some of the questions:
For starters, I suppose there is a reason why the “dual language” happened. Would or wouldn’t the same reason also apply to the superhuman artificial intelligence? I mean, if humans could invent something, a superhuman intelligence could probably invent it, too. Does that mean we are screwed when that happens?
The reason is probably functional; it’s definitely useful to distinguish between agents, and between an agent and its environment. However, I think we forgot that it’s just a useful convention. I think we are screwed if the AI forgets that (roughly the current state) and is superintelligent (not yet there). On the other hand, superintelligence might entail discovering non-dualism by itself.
Second, suppose that we have succeeded in making the superintelligence see no boundary between itself and everything else, including humans. Wouldn’t it mean that it would treat humans the same way I treat my body when I am e.g. cutting my nails? (Uhm, do people who use non-dual language actually cut their nails? Or do they just cut random people’s nails, expecting that strategy to work on average?) Some people abuse their bodies in various ways, and we have not yet established that the superintelligence would not, so there is a chance that the superintelligence would perceive us as parts of itself and still hurt us.
Well, cutting your nails is useful for the rest of the body; you don’t want to sacrifice everything for long nails. So, it is quite possible that we end up extinct unless we prove ourselves more useful to the overall system than nails. I do believe we have that in us, as it’s not a matter of quantity but of quality.
Finally, if the superintelligence sees no difference between itself and me, then there is no harm in lobotomizing me and making me its puppet. I mean, my “I” has always been a mere illusion anyway.
The ‘I’ of the AI is an illusion as well, so it will probably have some empathy and compassion for us, or just be indifferent to that fact.
The short answer is that this is just an intuition for a possible solution to the AI safety problem, and I’m currently working on formalising it. I’ve received valuable feedback that will help me move forward, so I’m glad I shared the raw ideas—though I probably should have emphasised that more. Thanks!
Not really talking about Buddhism—just the tags—and those are probably my last posts on it. I’m working on an AI safety paper now.
Appreciated.