My first impression of LessWrong is that allowing downvotes (and even upvotes) without any explanation risks making the site stagnant, reinforcing one narrow worldview without ever acknowledging it.
I can relate to the feeling. Whenever something I posted got downvoted without comment, I wondered about the reasons. Without a comment, what can the poster learn from the downvotes? It feels like being sent away. Which it might be. But that’s how a community maintains its standards—for better or worse. I think you are pointing out the “...or worse.” I think it is a risk that may be worth taking. The alternative is Well-Kept Gardens Die By Pacifism.
Thanks. I think that oftentimes when I downvote without giving a reason, it feels like backstabbing. So I try to put it into words, and then I realise that I might just be biased and end up cancelling the downvote.
It could also be the case that you either die by pacifism or by stagnation. Nothing lasts, so maybe it’s just about choosing how you want to die at a particular moment. Given our current high-stakes times, it might be wise to reflect on how you want to face that. I’m glad that a lot of AI safety research is happening here, and not only in the (much more) walled gardens of academia.
making the site stagnant, reinforcing one narrow worldview without ever acknowledging it.
We are trying to enforce certain norms of reasoning (rationality, trying to draw a map that better reflects the territory) that go against human nature (cognitive biases, appeals to social instincts, clever rhetoric). From many perspectives, this alone is already a narrow worldview.
When I have the time and patience, I try to explain my downvotes. But sometimes I don’t, and it still feels important to me to downvote the obvious violations of the local norms.
I have no idea what the “create new account” dialog looks like these days, but perhaps it should include an explicit warning that you are supposed to read the Sequences and post accordingly… or get downvoted. It is okay to disagree (sometimes it is even a way to get many upvotes). It is not okay to just ignore the norms because you are not even aware of them.
Web discussions without downvotes typically end up with some people saying the same stupid things all the time, because nothing can stop them other than a moderator action, which is a more extreme response than a downvote.
EDIT:
I have read your posts (and downvoted one of them), and… sigh. You seem to be yet another Buddhist coming here to preach. You don’t explain, you don’t provide evidence, you just declare your beliefs as facts. You either haven’t read the Sequences, or you completely missed the point. This is exactly what downvotes are for.
That puts your complaint about “stagnant, narrow worldview” into a completely different light.
Not really talking about Buddhism—just the tags—and those are probably my last posts on it. I’m working on an AI safety paper now.
Appreciated.
Even the Mormons give you some help before pointing you to their book.
I wish we had the same manpower the Mormons do; then we could afford to give more detailed feedback to everyone. To win verbal debates, it would also help to have a set of frequently practiced short replies to the most common arguments, but winning verbal debates is not why we are here, and explaining concepts takes more time.
Are you a sect using “rationality” as a free pass?
Maybe. (I mean, if I said “no”, would you believe me?) But this is probably the website most obsessed with rationality on the entire internet, although that’s mostly because most other websites don’t care at all. This website started over a decade ago as a blog focusing on artificial intelligence and human rationality; the idea was roughly that we need to think more clearly in order to build machines smarter than us and survive doing that. As opposed to the alternative strategies, such as “hoping that everything will magically turn out to be alright” or “choosing your favorite keyword and insisting that this is the key to making everything alright”. (Looking at your posts written so far, your favorite keyword seems to be “non-dual language”.)
I will try to give you more specific feedback, but the mistakes are subtle; I am not sure whether this explanation will help. I think the largest mistake is that you start by jumping to conclusions without justifying them. You come here saying (I am paraphrasing) “obviously, speaking the kind of language that makes a distinction between the one who is speaking and the rest of the environment is a mistake, actually it is the mistake, and if you abandon this kind of language, it will dramatically increase AI safety”. Which is an interesting idea, if you can provide evidence for it. But you just seem to take it for granted and expect the audience to play along.
For starters, I suppose there is a reason why the “dual language” happened. Would or wouldn’t the same reason also apply to a superhuman artificial intelligence? I mean, if humans could invent something, a superhuman intelligence could probably invent it, too. Does that mean we are screwed when that happens?
Second, suppose that we have succeeded in making the superintelligence see no boundary between itself and everything else, including humans. Wouldn’t it mean that it would treat humans the same way I treat my body when I am e.g. cutting my nails? (Uhm, do people who use non-dual language actually cut their nails? Or do they just cut random people’s nails, expecting that strategy to work on average?) Some people abuse their bodies in various ways, and we have not yet established that the superintelligence would not, so there is a chance that the superintelligence would perceive us as parts of itself and still hurt us.
Finally, if the superintelligence sees no difference between itself and me, then there is no harm in lobotomizing me and making me its puppet. I mean, my “I” has always been a mere illusion anyway.
...these are the basic objections I came up with after five minutes of thinking about the topic. But generally, going in the direction of the Sapir-Whorf hypothesis is a red flag.
(Relevant chapters from the Sequences that could have prevented you from making this mistake: Applause Lights—about being enthusiastic about a concept without having a detailed plan for how the concept would apply to the situation at hand; Detached Lever Fallacy—about why it is a mistake to assume that if doing something produces a certain reaction in humans, it will necessarily produce the same reaction in a machine.)
Now to answer some of the questions:
The reason is probably functional; it’s definitely useful to distinguish between agents, and between agent and environment. I think we forgot, though, that it’s just a useful convention. I think we are screwed if the AI forgets that (roughly the current state) and it is superintelligent (not yet there). On the other hand, superintelligence might entail discovering non-dualism by itself.
Well, cutting your nails is useful for the rest of the body; you don’t want to sacrifice everything for long nails. So, it is quite possible that we end up extinct unless we prove ourselves more useful to the overall system than nails. I do believe we have that in us, as it’s not a matter of quantity but of quality.
The ‘I’ of the AI is an illusion as well, so it will probably have some empathy and compassion for us, or just be indifferent to that fact.
The short answer is that this is just an intuition for a possible solution to the AI safety problem, and I’m currently working on formalising it. I’ve received valuable feedback that will help me move forward, so I’m glad I shared the raw ideas—though I probably should have emphasised that more. Thanks!
People have to make a tradeoff between many different options, one of which is providing criticism or explanations for downvotes. I guess there could be simple “lurk moar”, “read the Sequences”, or “get a non-sycophantic LLM to give you feedback” (I recommend Kimi k2) buttons, but at the end of the day downvotes allow for filtering bad content from good. By Sturgeon’s law there is too much bad content to give explanations for all of it.
(I’ve weakly downvoted your comment.)
IMO it’s better to either:
1- downvote and explain why,
2- just upvote a negative comment instead, or
3- simply not downvote at all.
Just downvoting a post without context feels unkind, unless kindness is not among the goals. Also, if you spent some time assessing a 5-minute post, then you can probably spend 10 seconds writing some words to explain the downvote.
Being slightly and illegibly unkind is somewhat effective at driving people away (or even motivating a change in behavior, though that’s harder) without too much drama (there are ways of expressing this in person, but online you need some UI feature as a substitute), which is occasionally very useful. With downvoting, this doesn’t happen much when contributions tend to be good, and happens often when they are not, so directionally it should be working well.
And since explaining rather than downvoting is slightly inconvenient and therefore rarely happens, the practical alternative is mostly not providing any feedback at all, which in the long run lets the site’s content drift to where the readers wouldn’t want it.
I don’t see why the site’s content would drift if you simply ignore posts. People would eventually stop posting on their own. So I don’t think that justifies being unkind or driving people away.
In the end, I don’t think it matters much whether a post has +500, 0, or −500. The score doesn’t seem to be an accurate reflection of quality or of what readers actually want. To be honest, it feels more like it fosters some sort of “smart” bias and tribalism. A range of −10 to +10 is probably more than enough.
If you feel strongly, positively or negatively, about a post, you should be able to take a moment to express that in a few words.
The only downside I can imagine is that it would take up more disk space since there would be more posts (if I understand the algorithm correctly). But that’s a trivial amount of space.
On the other side, you get the benefits of boosting the number of comments and normalising the votes. It seems there are always many more votes than comments, possibly creating an echo chamber.
I agree that it’s better. I often try to explain my downvotes, but sometimes I think it’s a lost cause so I downvote for filtering and move on. Voting is a public good, after all.
I think it undersells the hard work of the moderators behind the scenes to think that merely setting up the karma system reinforces the good worldview. See the Well-Kept Gardens post for the official stance on why keeping out unproductive distractions that disrupt the community is necessary.
Many people here are largely not interested in dissenting viewpoints on AI alignment, human genetic engineering, or utilitarianism. If not for downvotes, they would find some other way to shoo you away.