Here’s why I don’t find your argument compelling:
“Lizardman” is defined as a boogeyman, and it is implicitly assumed that the reader will agree on this. You are trying to overcome the standard penalty attached to arguments of the form “the problems are coming from a small (4%) minority.” If the 4% referred to the most politically powerful or richest 4%, you might have an advantage here, but alas, lizardman is implied to sit somewhere near the lowest class.
In Scott’s posts on this subject, I recall him being more dismissive of lizardman in general, chalking it up to potentially spurious errors in data collection, or to people who just felt like answering weirdly that day. Ultimately, he suggested it didn’t necessarily correspond to the same 4% of people each time.
Your argument that most of society’s constructs are defenses built specifically against weirdos is not very convincing. It’s not obvious why we’d expect that 4% to have the ability to cause social collapse of great magnitude, as opposed to, say, larger groups of people who memetically perpetuate flawed or incorrect beliefs that are difficult to dislodge.
This may overlap with the last point, but I don’t buy the general argument that small numbers of people with worse ideas will have an easier time influencing others because of social media, or something like that. Your post at least implies that, in general, bad ideas somehow transmit more easily than good ones, whether you explicitly believe this or not.
I think the key mechanism behind bad ideas being more influential than good ones is that we have a bias toward over-updating on negative news, and social media (as well as the news) lets our biased opinions be broadcast to the world; the opinion that gets shared is almost never the good one.
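To make the over-updating claim concrete: here is a toy sketch (all names and parameters are hypothetical, not from any actual model) of an agent who receives an unbiased stream of good/bad news but updates harder on the bad news. Even with honest evidence, the asymmetry alone drags the agent's estimate below the truth.

```python
import random

def drift_under_negativity_bias(true_quality=0.5, pos_rate=0.1, neg_rate=0.3,
                                steps=10_000, seed=0):
    """Toy model (hypothetical parameters): an agent estimates how 'good'
    the world is from a stream of good/bad news. The news stream itself is
    unbiased -- good news arrives with probability equal to true_quality --
    but the agent's update step is larger for bad news (neg_rate > pos_rate).
    Returns the agent's final estimate."""
    rng = random.Random(seed)
    estimate = true_quality
    for _ in range(steps):
        good_news = rng.random() < true_quality  # unbiased evidence
        if good_news:
            estimate += pos_rate * (1.0 - estimate)  # small move toward 1
        else:
            estimate -= neg_rate * estimate          # larger move toward 0
    return estimate
```

With the default (biased) rates the estimate settles well below the true value of 0.5, while setting `pos_rate == neg_rate` keeps it near the truth; nothing about the evidence changed, only the update asymmetry.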
I had to think about this point for a while; I’ve seen you mention it elsewhere too.
Yeah, I think you may be right that bad ideas at least propagate more widely than good ones. I’m not sure we over-update on them per se, but I do notice they get signal-boosted much more often. I assume this is because they need the boost in order to survive (better-sounding ideas would be adopted more easily by definition).
I’m not sure what mechanism would cause people to actually update on bad news more readily than good news. I have some thoughts, but they are more complicated. Basically, they amount to there being tribal in-groups who need the outgroup to be wrong, and who therefore update on negative news, since the outgroup is larger and external (and thus negative news is more likely to apply to it).
“Bad thing happened, ingroup right, outgroup wrong, we told them so, etc.”
I don’t believe I made that argument.