In roughly decreasing order of annoyance:
A varying degree of belief in utilitarianism (ranging from a confused arithmetic altruism to hardcore Benthamism) often seems to be taken for granted, and is rarely challenged. The feeling I get when reading posts and comments that assume the above is very similar to what an atheist feels when frequenting a community of religious people. The fix is obvious, though: I should take the time to write a coherent, organised post outlining my issues with it.
A little Singularitarianism, specifically the assumption that self-improving AI = InstantGod®, and that donating to SIAI is the best possible expected value for your cash. This isn’t a big deal, because these discussions tend to be confined to their own threads. (Also, in the thankfully rare instance that someone brings up the Friendly AI Rapture when it adds nothing to the conversation, I get to have fun righteously snarking at them, and usually get cheap karma too, perhaps from the other non-Singularitarians like me.) But it does make me feel less attached and sympathetic to other LessWrongers.
Of late, there’s a lot of concern about what content should be on this site and about how to promote the site and its mentality to the ‘muggles’. This puzzles me, because I treat LW as just a place where mostly smart INTJ people hang out and flex their philosophical muscles when they feel like it, and I don’t feel particularly interested in missionary work. While I do find it desirable to make more people more rational, I thought everyone here—except for those who get their paycheck from SIAI/FHI, I guess—had better and more efficient purposes to which to dedicate their precious, precious willpower-to-do-stuff-they-don’t-enjoy than writing posts they don’t really feel like writing. If providing “hardcore” content to LW feels like a chore, then we have a tragedy of the commons on our hands; is the site important enough to warrant one of the standard workarounds?
Eliezer’s reduced presence. Other contributors’ posts are even more productive and useful than his, but none are quite as enjoyable to read.
Some top contributors regularly get double-digit karma for utterly trivial comments. Can’t think of a fix that would be less annoying than the issue.
More Anglo prevalence than I would have expected for a site like this.
No Auto-Pager script for people’s histories.
Are there any English-language discussion sites that aren’t very Anglo-centric? The more troubling thing for me is the feeling that we’re just bouncing around ideas that flow out of Silicon Valley, instead of having multiple cultural centers generating new ideas with their own slant on things and having a back-and-forth. There could be communities in Russian, Chinese, German, French or Spanish that are producing interesting ideas and could be aligned with LW if someone bothered to translate their stuff—or we could simply be in a situation where the interesting new ideas that are roughly compatible with the LW meme cluster happen to emerge mostly from the Anglosphere.
The split between analytic philosophy done in English and continental philosophy done in French and German is a bit similar, and it seems to have led to mutual unintelligibility at a conceptual level, not merely a linguistic one. As far as I can tell, the two schools don’t have much use or appreciation for each other’s work even when it does get translated. There seems to be some deep intertwining of language, culture and the sort of philosophy that gets produced, and LW stuff might be subject to it as well.
It’s odd, in general, that I feel like I have a much better idea of what’s going on in the US than in most of Europe, since the primary language of most Americans is one I can understand and the primary language of most Europeans is one I can’t.
That utilitarianism is rarely challenged here simply isn’t true. See, for example, the reception of this post.
Altruism is a common consequence of utilitarian ideas, but it’s not altruism per se (which is what the linked post and comments discuss) that irks me; rather, it’s the idea that you can measure, add, subtract, and multiply desirable and undesirable events as if they were hard, fungible currency.
Just to pick the most recent post where this issue comes up: here is a thread that starts with a provocative scenario and challenges people to take a look at what exactly their ethical systems are founded on, but—with only a couple of exceptions, including the OP—people just automatically skip to wondering “how could I save the most people?” (decision-theory talk), or “what counts as ‘people’, i.e. those units of which I should obviously try to save as many as possible?”. There’s an implicit assumption that any sentient being whatsoever = 1 ‘moral weight unit’, and that it’s as simple as that. To me, that’s insane.
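To make the assumption I’m objecting to concrete, here’s a toy sketch of that kind of aggregation (the function, names and numbers are all mine, purely illustrative):

```python
# A caricature of "arithmetic altruism": every sentient being counts as
# exactly one fungible moral weight unit, so comparing outcomes reduces
# to summing those units. Illustrative only; not anyone's actual proposal.

def naive_utilitarian_value(outcome):
    """Total moral value = sum of per-being utilities, each weighted 1."""
    return sum(utility for _being, utility in outcome)

# Two hypothetical outcomes, as (being, utility-if-saved) pairs.
outcome_a = [("human", 1), ("human", 1), ("human", 1)]
outcome_b = [("human", 1), ("chimp", 1)]  # the contested step: chimp = 1 unit too

best = max([outcome_a, outcome_b], key=naive_utilitarian_value)
print(best)  # picks outcome_a: 3 units beat 2, "as simple as that"
```

The whole complaint is that the weighting step—assigning every being the same hard number and then doing arithmetic on it—gets treated as obvious rather than argued for.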
Edit: The next one I spotted was this one, which is unabashedly utilitarian in outlook, and strongly tied to the Repugnant Conclusion.
Fair enough; I guess komponisto’s comment in this thread primed me to misinterpret that part of your comment as primarily a complaint about utilitarian altruism.
Please do write that post. @#%@#$ utilitarianism.