Trolling usually means disrupting the flow of a discussion through deliberately offensive behaviour towards other participants. It usually doesn’t denote proposing a thought experiment with a possible solution that is likely to be rejected for its offensiveness. But that could perhaps be called “trolleying”.
Prisoner’s Dilemma Tournament Results
There are probably several points on which I would broadly agree with you; however, your post would be much better without the condescending tone. And perhaps without all the non sequiturs:
If the rest of the world is underconfident about these ideas, then these investments would surely have an enormous expected rate of return.
Why? If people don’t believe that cryonics will work, you can’t sell it to them for a lot of money even if they are wrong. (Disclaimer: I haven’t signed up for cryonics.)
How many people responding to this survey have actually made significant personal preparations for survival, like a fallout shelter with food and so on which would actually be useful under most of the different scenarios listed?
If you believed there was going to be a nuclear war in 90 years, would you start buying food and preparing the shelter right now?
The risks listed in the survey results were pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, economic/political collapse, pandemic (natural), nanotech, asteroid. A few of them could be short-term catastrophes whose critical first few weeks could be survived in a shelter, but not necessarily. If we are speaking about a disaster wiping out 90% of the global population or more, it’s a fair assumption that a lot of people are thinking of an event which renders Earth unlivable, shelter or no shelter.
People can prefer death to living in a post-apocalyptic world. (Or prefer a “normal” pre-apocalyptic life followed by death to a life spent preparing for the apocalypse and surviving it.)
The question was “which disaster do you think is most likely...”. Therefore, if 23% answer bioengineered pandemics, it doesn’t imply that 23% of people actually consider bioengineered pandemics probable. It can mean merely that it is less improbable than the rest of the list.
That no more than 5% of LW readers are preparing a shelter (likely a correct guess) is an argument for what, exactly? It can be evidence that the general LW opinion is actually closer to yours than you seem to believe, or it can be evidence that people are procrastinating, but it certainly doesn’t imply a “grand level of overconfidence in the probabilities of any of these [catastrophes] occurring”.
(Disclaimer: I don’t especially fear future global catastrophes, and moreover I don’t think we can predict them significantly better than by random guessing.)
The questions on dust specks vs torture and Newcomb’s Problem are so unlikely to ever be relevant in reality that I view discussion about them as worthless.
Relevant to what? It seems that those discussions were intended as illustrations of theoretical problems with common utilitarian and decision-theoretic intuitions. Learning that one’s intuitions have a bounded domain and don’t work well in extreme, unrealistic scenarios is perhaps not a life-changing achievement, but it is at least interesting. Perhaps not interesting to you, but not interesting to you and worthless are different things. (Disclaimer: I don’t think that having the correct answer to Newcomb’s Problem or dust specks is going to be practically important in and of itself.)
Karma threshold for meetup organisers?
In my defense, we’ve had other elementary posts before, and they’ve been found useful; plus, I’d really like this to be online somewhere, and it might as well be here.
It’s quite interesting that people feel a need to defend themselves in advance when they think their post is elementary, but almost never feel the same obligation when the post is supposedly too hard, or off-topic, or inappropriate for some other reason. All the more interesting given that we have all probably read about the illusion of transparency. Still, it seems that including this sort of signalling is irresistible, although (as the author’s own defence states) experience tells us that such posts usually meet a positive reception.
As for my own part of the signalling: this comment was not meant as criticism. However, I find it more useful when people defend themselves only after they are criticised or otherwise attacked.
On Debates with Trolls
So, being able to observe that one behaviour causes the desired outcome more often than another counts as reasoning using Bayes’ Theorem? At this level of vagueness we could proclaim children natural frequentists, or Popperian falsificationists, or whatever else with equal ease.
The children adjusted their hypotheses appropriately when they saw the statistical data
Using such words to describe small children trying to make a toy light up makes me suspect that this post is a parody.
Reading this, I wonder why an LDS missionary got interested in a rationalist community which is generally hostile to religion. I would appreciate some explanation of the author’s motivations. Strictly speaking, this is irrelevant to the message, but being confused about someone’s aims somewhat lowers my trust in their suggestions. This is not to say that the suggestions themselves are suspicious or clearly wrong—on the contrary, they are written in an impartial style that fits LW customs very well, which makes me even more curious about the author’s background.
On a slightly different note, there is only so much one can improve at the organisational level, and we should keep in mind what expectations we have of this community. In a sense, I second Vladimir Nesov’s and cousin_it’s comments. Community building is instrumentally important, but it really shouldn’t become a major terminal value if we want to maintain a high level of rationality. Aspiring rationalists are in danger of wandering in a strange circle: at first, they crave to rid themselves of common biases and to develop abilities for efficient truth seeking. But then these very abilities lead them to discover that sometimes an efficient way to do things is to compromise with human biology and use techniques which cooperate with the biases instead of fighting them. Often it turns out that the best such techniques are already implemented by somebody—church missionaries, for example—simply because it is hard to compete with memes perfected by centuries of testing and evolution.
In short, I am afraid that if we put too much weight on values (such as community building) also pursued by non-rationalist groups, it is likely that the most efficient methods are already possessed by those non-rational groups (like churches) and that we should (instrumentally) learn from them (as this series of posts claims). This wouldn’t be bad in itself if those methods weren’t optimised for a different set of goals. It is very easy to adopt a few manipulation techniques to strengthen community coherence and be astonished by their efficacy, while remaining completely unaware of all the non-obvious ways in which these techniques undermine the main goal, which is rationality. By the time people start realising that something has gone wrong, it may be too late. We already have the label dark arts for efficient but dangerous procedures. Some LWers may have grown confident enough to believe that they can use the dark arts without being harmed. I am not so optimistic.
An Anchoring Experiment: Results
It’s industrial-strength bleach.
It is consumed diluted (I think the vendors suggest mixing it with lemon juice or something similar) and only a few drops a day, so it’s not as bad as drinking industrial-strength bleach. (There is a certain threshold of strength above which the evidence overcomes even crackpots’ natural immunity; death or immediately noticeable health problems tend to be above that threshold. There are, naturally, people who ignore all suggestions and take the stuff in concentrated form, but I suppose they don’t stay in the pool of MMS proponents for long.)
Actually, I don’t think MMS is sillier than homoeopathy—although sodium chlorite is a poison, poisons in small concentrations are used in medicine, so it has at least a chance of producing some effect, which can’t be said of distilled water.
I have argued—over the internet—with a person who claimed to have been cured of various diseases by MMS, and who was very indignant when I said it doesn’t work.
Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable.
Isn’t this true just because of the way philosophy is effectively defined? It’s a catch-all category for poorly understood problems which have nothing in common except that they aren’t properly investigated by some branch of science. Once a real question is answered, it no longer feels like a philosophical question; today, philosophers no longer investigate the motion of celestial bodies or the structure of matter.
In other words, I wonder what the fundamentally philosophical questions are. The adverb fundamentally creates the impression that those questions will still be regarded as philosophical after being uncontroversially answered, which I doubt will ever happen.
What is the difference between rationality and objectivism?
I have had a few discussions with Objectivists and read a few other discussions in which Objectivists took part, and I haven’t seen a particularly high level of rationality there. Objectivism as actually practised is a political ideology with all the usual downsides—fallacious arguments of all kinds, a tight connection between beliefs and personal identity, regarding any opposition as a threat to morality by default, and so on.
Objectivism as a philosophy is a mix of often mutually incompatible beliefs, connected by a vague net of equivocations. You may have been misled by the etymology of “Objectivism” into thinking that belief in objective reality and morality is the distinguishing characteristic belief of Objectivists. But it is not so. To be an Objectivist, you ideally have to agree that:
For all X, X=X
The only terminal value is survival.
There are natural human rights to life, property and liberty, and no other rights.
Selfishness is a virtue and altruism is a vice.
Laissez-faire capitalism with minimal to non-existent state is the only moral political system.
All of the above can be derived step by step by mere logic from the first axiom, with no observational data needed.
Ayn Rand was one of the greatest thinkers of the 20th century (and perhaps of all human history).
That “there is only one true way of some things” is not a steelman version of Rand’s Objectivism; it’s a vague, nearly tautological statement with which almost everyone is bound to agree, Objectivist or not.
it hinders neutral discussion of its relative badness compared to other fallacies
Not only that, but it is also non-descriptive.
I agree completely. I still read LessWrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that some cranky process is going on here. Still, the Roko affair caused me to significantly lower the probability I assign to SIAI’s success, and forced me to seriously consider the hypothesis that Eliezer Yudkowsky had gone crazy.
By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog’s header proudly states; instead, the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality: e.g. cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.
As is probably intended, the more reminders like this I read, the more ethical I should become. As it actually works out, the more of this I read, the less interested in ethics I become. Maybe I am extraordinarily selfish and this effect doesn’t happen to most people, but it should at least be considered that constant preaching of moral duties can have counterproductive results.
Well, since you have asked for feedback, I may provide some, although probably not of the kind requested by this post.
Your repeated requests for feedback, accompanied by links proving that you are able to correct your mistakes and that you like the corrections, … create an impression of some heavy signalling going on. Namely, it’s one of the norms here—and probably also among the readers of your blog—to be able to accept constructive criticism, to avoid confirmation bias, to respect a lot of debate rules, and so on. Signalling adherence to those norms increases one’s status. But if the signalling part is too apparent, it naturally leads to suspicion of hypocrisy and to distrust at the meta-level.
Of course, all this is obvious. I write it only because I am not sure whether you realise that the way you ask for feedback may appear to fall into this category. Or actually belong to it. Substituting the usual pride in never being wrong with pride in not being the sort of person who takes pride in never admitting wrongness is a useful mind hack, but it’s still a goal different from being right in the first place.
Not that I could find a single instance where your signalling goals prevented you from finding the truth efficiently. You are certainly not the usual open-mindedness signaller. If there is a danger, it is certainly a subtle one. You just don’t need to ask for feedback this way. Valuable criticism is usually spontaneous: when people detect a mistake, they say so (unless the local norms discourage such reactions, which is certainly not the case on LW). On the other hand, when requested to criticise, people start hastily searching for something to criticise, and either find an unimportant detail, or construct a non-existent problem wrapped in a cloak of rationalisations, or fail to find anything and produce an equally useless you’re-so-awesome-I-can’t-find-a-single-problem response. (Not that the latter isn’t pleasant to hear.)
In short, if you make a mistake, don’t be afraid we’ll keep it secret.
I have skimmed through the comments here and smelled the weak odour of a flame war. Well, the discussion is still rather civil and far from a flame war as understood on most internet forums, but it somehow doesn’t fit what I am used to seeing here on LW.
The main problem I have is that you (i.e. curi) have repeatedly asserted that the Bayesians, including most LW users, don’t understand Popperianism and that Bayesianism is in fact worse, without properly explaining your position. It is entirely possible, even probable, that most people here don’t actually get all the subtleties of Popper’s worldview. But then a better strategy may be to first write a post which explains these subtleties and why they are important. On the other hand, you don’t need to tell us explicitly “you are unscholarly and misinterpret Popper”. If you actually explain what you ought to (and if you are right about the issue), people here will likely come to understand that they were previously wrong, and they will do so without the feeling that you seek confrontation rather than truth—a feeling I mildly have.
Upvote if your answer is lower.
Upvote this if learning about the new planet full of happy people feels like good news to you.
I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site, and perhaps they are relevant:
LW seems to be slowly becoming self-obsessed. “How do we get better contrarians?” “What should our debate policies be?” “Should discussing politics be banned on LW?” “Is LW a phyg?” “Shouldn’t LW become more of a phyg?” Damn. I am not interested in endless meta-debates about community building. Meta-debates could be fine, but only if they are rare—otherwise I feel I am losing purposes. Object-level topics should form an overwhelming majority, both in the main section and in the discussion.
Too narrow a set of topics. Somewhat ironically, the explicitly forbidden topic of politics is debated quite frequently, while many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is “off topic”—ask on MathOverflow. A question about biology? Downvoted, unless it is ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted. But there is only so much one can say about AI and ethics and Bayesian epistemology and self-improvement at a level accessible to a general internet audience. When I discovered Overcoming Bias (half of which later evolved into LW), it was overflowing with revolutionary and inspiring (from my point of view) ideas. Now I feel saturated, as the majority of new articles seem (again, from my point of view) to be devoid of new insights.
If you are afraid that LW could devolve into a dogmatic, narrow community without enough contrarians to maintain a high level of epistemic hygiene, don’t try to spawn new contrarians by methods of social engineering. Instead, try to encourage debate on a diverse set of topics, mainly those which haven’t already been addressed by 246 LW articles. If there is no consensus, people will disagree naturally.