I’m unsure whether I should upvote this post or not. Much of it seems to raise valid points. But towards the end you write:
You (LW) may dislike this. You can provide me with poll results informing me that you dislike this, if you wish. (This is pretty silly if you ask me; you think you are rating me, but clearly, if I am not a mentally handicapped individual, all that does is provide me with pieces of information which I can use for many purposes besides self-evaluation; I self-evaluate by trying myself on practical problems, or when I actually care.)
This comes across as somewhat passive-aggressive, with a bit of the standard “if you downvote me, I win” posture that is common among people who are either set in their ways or just trying to troll. I don’t get that impression at all from the rest of the post, but this bit seems to signal it strongly. It may make sense to rewrite that paragraph or delete it entirely.
I’m unsure whether I should upvote this post or not. Much of it seems to raise valid points.
If the article provides more good than bad (say, 10 well-articulated objections versus 1 short trolling paragraph), I guess it still deserves an upvote.
I hate karma games, and usually automatically downvote any article or comment that speaks about its own karma. But this article has enough useful content that I made an exception to this rule. (It also helped that the article is rather long, so I skipped the offending paragraph.)
I disagree with the idea that criticism is downvoted here, unless it happens to be badly written criticism, and I consider this accusation very unfair. However, I upvoted this article, not to provide some kind of “balance” or “diversity”, but as my honest judgement of the quality of the first 90% of its text.
EDIT: I also very much liked the idea that a self-improving AI would probably wirehead itself. It never occurred to me, and it makes a lot of sense. (However, if I hope that future humans will be able to resist wireheading, it makes sense to worry that some AIs will manage to resist wireheading too.)
I also very much liked the idea that a self-improving AI would probably wirehead itself. It never occurred to me, and it makes a lot of sense.
This idea is intuitively plausible, but doesn’t hold up when considering rational actors that value states of the world instead of states of their minds. Consider a paperclip maximizer, with the goal “make the number of paperclips in the universe as great as possible”. Would it rather a) make paperclips, or b) wirehead to convince itself that the universe is already full of paperclips? Before it wireheads, it knows that option a) will lead to more paperclips, so it does that. Similarly, I would rather actually help people than feel the warm glow that comes from helping people without any actual helping.
Easier said than done. Valuing the state of the world is hard; you have to rely on senses.
Well, yes, but behind the scenes you need a sensible symbolic representation of the world, with explicitly demarcated levels of abstraction. So, when the system is pathing between ‘the world now’ and ‘the world it wants to get to,’ the worlds in which it believes there are a lot of paperclips are in very different parts of state space than the worlds which contain the most paperclips, which is what it’s aiming for. Being unable to differentiate would be a bug in the seed AI, one which would not occur later if it did not originally exist.
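The distinction being drawn here, between worlds the agent believes contain many paperclips and worlds that actually contain the most paperclips, can be sketched as a toy model. This is purely illustrative (the actions, payoffs, and function names are all made up for the example, not anyone’s actual proposal): a world-valuing agent scores each action with its current utility function applied to the predicted world-state, so wireheading loses even though it would produce the happiest belief-state.

```python
# Toy illustration: an agent that values states of the world will not
# choose to wirehead, because it evaluates actions with its *current*
# utility function applied to predicted world-states.

def predict_world(world, action):
    """Predict the external world-state after taking an action."""
    world = dict(world)
    if action == "make_paperclips":
        world["paperclips"] += 1
    # "wirehead" changes only the agent's future beliefs, not the world.
    return world

def predict_belief(world, action):
    """Predict the agent's future *belief* about the paperclip count."""
    if action == "wirehead":
        return 10**9  # deluded belief: universe already full of paperclips
    return predict_world(world, action)["paperclips"]

def utility(world):
    """The agent's current utility function: actual paperclips only."""
    return world["paperclips"]

world = {"paperclips": 0}
actions = ["make_paperclips", "wirehead"]

# A world-valuing agent ranks actions by utility of the predicted world...
print(max(actions, key=lambda a: utility(predict_world(world, a))))
# -> make_paperclips

# ...even though wireheading would yield a far "happier" belief-state.
print(max(actions, key=lambda a: predict_belief(world, a)))
# -> wirehead
```

The bug mentioned above would correspond to the agent accidentally maximizing `predict_belief` instead of `utility(predict_world(...))`, i.e. failing to keep the belief/world levels of abstraction demarcated.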
This comes across as somewhat passive-aggressive, with a bit of the standard “if you downvote me, I win” posture that is common among people who are either set in their ways or just trying to troll.
I consider this to be a problem with reputation systems rather than with the people who raise that point.
I think his point is absolutely valid. What he is saying is that reputation systems, like the one used on Less Wrong, allow for ambiguous interpretations of the number they assign to content. That downvotes mean he is objectively wrong is just one, rather unlikely, interpretation, given the selection pressure such reputation systems create and the human tendency toward groupthink.
I find the interpretation scheme “net downvotes mean more people want less content like this than want more content like this; net upvotes mean the reverse” to be fairly unambiguous.
Sure, it would be nice to have an equally unambiguous indicator of something I care about more (like, for example, the objective wrongness of a statement). Reputation systems aren’t that. Anyone who believes they are, is mistaken. Anyone who expects them to act as though they were and pays attention will be disappointed.
There are millions of other things that would be nice to have that reputation systems aren’t, also.
I read this as a reference to past criticisms having been met with really obvious mass-downvoting.
Can you link to any single instance of that? All I’ve seen here is intelligent criticism being massively upvoted, and a few instances of unintelligent, incomprehensible, or repetitive (e.g. too many posts on oracle AI) criticism being downvoted to −5 or so.
Since you ask: I noted here an example of answering the actual question getting a downvote. (And the fact of me noting it got downvoted too.)
edit: at time of making this comment, the linked comment and the comment it points to were both at −1.
It looks like that comment had fluctuating karma both before and after today—it’s hardly mass-downvoting if the comment never goes below −1. Also, AFAICT the people who downvoted were doing it because they thought Dmytry was confused about evolution and/or hill-climbing algorithms. I don’t know enough about hill-climbing to say for sure if he was making a mistake worthy of downvoting.
That said, I have updated in favor of people sometimes downvoting for disagreement, and I don’t approve of that. I generally try to avoid it myself. For instance, I haven’t downvoted anything from Dmytry that I disagree with, because I think his points are intelligent enough to be worth making. Thanks for pointing out that example.
Please note that mass-downvoting has happened to others recently. I would hope that sockpuppetry hasn’t become an accepted mode of social discourse here (per Reddit), but it may be too late.