I don’t think the problem is forgetting that other arguments exist; it’s confronting whether an argument like “perpetuates colonialism” dominates concerns like “usability.” I’d like to know how you handle arguing for something like “usability” in the face of a morally urgent argument like “don’t be eurocentric.”
I would probably start with rejecting the premise that I have to listen to other people’s arguments.
(This makes even more sense when we know that the people who loudly express their opinions are often just a tiny minority of users. However, it is perfectly possible to ignore the majority, too.)
I think this is a mistake that many intelligent people make, to believe that you need to win verbal fights. Perhaps identifying as a rationalist can make it even worse, if you conflate “being rational” with “winning verbal fights”. The trick is to realize that it is perfectly possible to hear a verbal argument against doing X, and then do X anyway (without having to win the verbal fight first, or at all).
“Don’t be eurocentric” is not an urgent problem at all. “Don’t be needlessly inefficient just to virtue-signal group affiliations” is an even bigger problem in the grand scheme of things: what if that user never gets to use the app because he never manages to understand the UI? Also, most developers aren’t in a good enough position in the market to afford losing users over such trivialities.
Those are two really different directions. One option is to just outright dismiss the other person. The other is to cede the argument completely but claim Moloch dominates that argument too. Is this really how you want to argue: everything is either 0 or the next level up of infinity?
I do believe the “eurocentric” argument is a manifestation of Moloch. It is the new version of “X is the next Hitler” or “Y was done by the Nazis”: it can be used to dismiss any argument coming from the West and to justify almost anything. For example, it could be used by China or any Latin American country to justify putting an AGI in the government by saying: “AI safety is a eurocentric concept made to perpetuate Western hegemony.”
So as a rule of thumb, I refuse to give anyone saying that the benefit of the doubt. In my model, anyone using that argument has a hidden agenda behind it, and even if they don’t, the false positives are not enough to change my mind. It’s a net-positive personal policy, sorry not sorry.
It certainly depends on who’s arguing. I agree that some sources online see this trade-off and end up on the side of not using flags after some deliberation, and I think that’s perfectly fine. But this describes only a subset of cases, and my impression is that very often (and certainly in the cases I experienced personally) it is not even acknowledged that usability, or anything else, may also be a concern that should inform the decision.
(I admit though that “perpetuates colonialism” is a spin that goes beyond “it’s not a 1:1 mapping” and is more convincing to me)
Well, conversely, do you have examples that don’t involve one side trying to claim a moral high ground and trivialize other concerns? That is the main class of examples I can see as relevant to your posts, and for these I don’t think the problem is an “any reason” phenomenon; it’s breaking out of the terrain where the further reasons are presumed trivial.
Some further examples:
Past me might have said: Apple products are “worse” because they are overpriced status symbols
Many claims in politics, say “we should raise the minimum wage because it helps workers”
We shouldn’t use nuclear power because it’s not really “renewable”
When AI lab CEOs warn of AI x-risk, we can dismiss that because they might just want to build hype
AI cannot be intelligent, or dangerous, because it’s just matrix multiplications
One shouldn’t own a cat because it’s an unnatural way for a cat to live
Pretty much any any-benefit mindset that makes it into an argument rather than purely existing in a person’s behavior