… Do you ever talk about anything else other than your lack of sexual success? Alright, granted – I saw a few posts from you on cryonics. What would it take to steer you towards posting more of that and less of this? It’s largely off-topic for LW, off-putting as well, and irrelevant to anyone who is not you. I get that it’s something that concerns you deeply, but seriously, try getting advice on that one on a specialised forum.
Dahlen
Look, while nothing you’re saying here is particularly objectionable in my opinion (not that I agree, it’s just that the disagreement is not violent), I’ve just gone over your comment history, and your comments were all like “I don’t believe I’ve gained any benefit from reading this post”, “I don’t think there’s much worth in discussing this”, “I’m not very convinced by the arguments made in this post” etc. It goes on like this for about half a year.
Which gets me thinking: okay, so you didn’t like LessWrong from the very beginning—but then why spend time showing this to everybody? It doesn’t make sense to make an account just to periodically express your dissatisfaction with the content posted—I mean, when I believe a website to be boring and useless, I prefer not to bother with it and click the red X instead. Do you do this for every other site you stumble upon and come to dislike? Because that would be quite a lot of time wasted on places that just aren’t worth it.
’Twas about time that I decided to officially join. I discovered LessWrong in the autumn of 2010, and until now I’ve felt reluctant to actually contribute—most people here have far more illustrious backgrounds. But I figured that there are sufficiently few ways in which I could show myself as a total ignoramus in an intro post, right?
I don’t consider my gender, age and nationality to be a relevant part of my identity, so instead I’d start by saying I’m INTP. Extreme I (to the point of schizoid personality disorder), extreme T. Usually I have this big internal conflict going on between the part of me that wishes to appear as a wholly rational genius and the other part, who has read enough psychology and LW (you guys definitely deserve credit for this) to know I’m bullshitting myself big time.
My educational background so far is modest, a fact for which procrastination is the main culprit. I’m currently working on catching up with high school level math… so far I’ve only reviewed trigonometry, so I’m afraid I won’t be able to participate in more technical discussions around here. Aside from a few Khan Academy videos, I’m still ignorant about probability; I did try to solve that cancer probability problem though, and when put like that into a word problem, I used Bayes’ theorem intuitively. (Funny thing is, I still don’t understand the magic behind it, even if I can apply it.) I know no programming beyond really elementary C++ algorithms; I have a pretty good grasp of high school physics, minus relativity and QM. I am seeking to do everything in my power to correct these shortcomings, and when/if I achieve results, I’ll be happy to post my findings about motivation & procrastination on LW, if anyone is interested.
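For anyone curious about the mechanics, the cancer word problem mentioned above can be worked through explicitly with Bayes’ theorem. A minimal sketch, using figures commonly quoted in versions of that puzzle (the prevalence, sensitivity, and false-positive numbers here are illustrative assumptions, not from any particular source):

```python
# Bayes' theorem on the classic cancer-screening word problem.
# All three input numbers are assumed for illustration.
prior = 0.01          # P(cancer): 1% of people screened have it
sensitivity = 0.80    # P(positive test | cancer)
false_pos = 0.096     # P(positive test | no cancer)

# P(positive) by the law of total probability
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(cancer | positive)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # roughly 0.078
```

The “magic” is just that the large healthy population generates far more false positives than the small sick population generates true positives, so even a positive result leaves the probability of cancer under 8%.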
That which I have in common with the rest of this community is a love for rational, intelligent and productive discussions. I’m hugely disappointed with the overwhelming majority of internet and RL debates. Many times I’ve found myself trying to be the voice of reason and pointing out flaws in people’s reasoning, even when I agreed with the core idea, only to have them tell me that I’m being too analytical and that I should… what… close off my mind and stop noticing mistakes, right? So I come here seeking discussions with people who would listen to reason and facilitate intellectually fruitful debates.
I’m very eager to help spread the knowledge about cognitive biases and educate people in the art of good reasoning.
I’m also interested (although not necessarily well-versed, as mentioned above) in most topics people here are interested in—everything concerning mathematics and science, as well as philosophy and the mind (which are, by comparison, my two strongest points).
There are quite a few ways in which I don’t fit the typical LW mold, though, and I’m mentioning this so that I find out whether any of these are going to be problematic in our interaction.
For one, I’m not particularly interested in AI and transhumanism. Not opposed to, just indifferent. The only related topic which interests me is life extension research. In the eventuality that some people might try to change my mind about this from the get-go, as I’ve seen some do with other newbies, I know you probably have some very good arguments for your position, but hopefully nobody’s going to mind one less potential AI enthusiast. My interests are spread thin enough as they are.
I seem to be significantly more left-leaning than the majority of folks here. I’m decidedly not dogmatic about it, though, and on occasion I speak out against heavily ideological discourse even when it has a central message that I agree with.
Kind of clueless and mathematically illiterate at this moment.
This has to be getting rather long, so I’ll stop here, hoping that I’ve said everything that I believed to be relevant to an intro post.
Availability heuristic! … It was the first one that came to mind.
The kinds of things which an upper-class American is prone to believe (which would not garner him favour with other members of society), I suppose. I mean, I’m not expecting him to be secretly yearning for a Communist workers’ paradise. Also he is an entrepreneur with transhumanist sympathies, therefore a forward-thinking guy, so probably the internet crusaders from the opposite camp aren’t bashing his ideas yet—because they haven’t yet conceived of them; you can count on people like him to think in an original way—but probably will be 20 years from now.
HOWEVER. I take issue with the thing you’re attempting to do with this post. Obviously none of us are Thiel himself; obviously the attempt to guess what Thiel meant is a classic case of grasping at straws; whatever the community can come up with probably isn’t even in the same ballpark as Thiel’s secret heresies. Besides, if I were him, I’d personally be bothered by some random people’s attempts to guess at beliefs I don’t want to make public, for reasons relating to the telephone game that ensues and the risk of other people from other websites misinterpreting those positions as my own. Alas, that is but my own take on this, because I’m not Peter Thiel. Obviously.
I regard this kind of challenge as inflammatory. Even though I remember having made a case for more political discussion on LessWrong, time and again I get reminded how awfully LessWrongers handle political topics, and how badly I had overestimated people’s aptitude at not causing political discussions to degenerate into flame wars. This is worse than the average political discussion. This is an open invitation for people to fill in the blanks with their pet thoughtcrimes, as long as they consider themselves roughly on the same side of the political spectrum as Thiel. It’s going to attract the worst sort of people, and it can harm participants, onlookers, and Thiel himself.
You’re a smart guy, you don’t need me to tell you that we cannot run an accurate simulation of Thiel, and I know from your article publication history that you’re not doing this for inquisitorial purposes, which leaves the intention of drawing attention to his “really good ideas”. However, the man himself (ostensibly) wants the opposite. Which he has every right to do. So how about we leave him be and refrain from making wild guesses as to what he meant?
Are you foreseeing that Stuart’s baby will eventually make a positive impact by reducing suffering of others?
“The one with the power to vanquish the Dark Lord approaches … born as the seventh month dies …”
In a recent conflict with someone (who seemed to be mad at me for no reason I could agree to), I tried two strategies consecutively: reasonable discussion & mediation techniques, and rage fits (I basically faked being really mad and upset at them to see what would happen; I’m sorry, I know, I was being a manipulative bastard). My faith in humanity took a hit (even though it shouldn’t have) after seeing that this particular person was basically immune to logos but very responsive to pathos.
So just a little reminder that may or may not be redundant around here: don’t do this. Don’t give more of a chance to the person who screams and acts crazy than to the person who tries to work things out with you the calm, mature way. It’s exactly the wrong way to respond if you want to incentivize rational behavior on the part of the other party. The message is basically “I won’t listen to any attempt at reasonable discussion, but try going hysterical on me, that one has good odds of success”, thereby earning yourself more hissy fits in the future. And especially don’t do this as parents, to your kids.
(I don’t know why I’m saying this here, it may go without saying for a smart bunch of people like you. Perhaps I’m temporarily under the impression that it is not obvious to everybody how astonishingly stupid it is to be more convinced by pathos than by logos, just because it wasn’t obvious to my IQ<95 acquaintance.)
And possibly a hefty amount of socially unacceptable false things too.
It could’ve been an explanation, but in the end it turned out to be declaring sides.
Seeing as, in terms of absolute as well as disposable income, I’m probably closer to being a recipient of donations than a giver of them, effective altruism is among those topics that make me feel just a little extra alienated from LessWrong. It’s something I know I couldn’t participate in, for at least 5 to 7 more years, even if I were so inclined (I expect to live in the next few years on a yearly income between $5000 and $7000, if things go well). Every single penny I get my hands on goes, and will continue to go, strictly towards my own benefit, and in all honesty I couldn’t afford anything else. Maybe one day, when I stop always feeling a few thousand $$ short of a lifestyle I find agreeable, I may reconsider. But for now, all this EA talk does for me is reinforce the impression of LW as a club for rich people in which I feel maybe a bit awkward and not belonging. If you ain’t got no money, take yo’ broke ass home!
Anyway, the manner in which my own existence relates to goals such as EA is only half the story, probably the more morally dubious half. Disconnected from my personal circumstances, the Effective Altruism movement seems one big mix of good and not-so-good motives and consequences. On the one hand, the fact that there are people dedicated to donating large fractions of their income is a laudable thing in itself. On the other hand...
I don’t believe for one second that effective altruism would have been nearly as big of a phenomenon on LessWrong, if the owners of LessWrong hadn’t been living off people’s donations. MIRI is a charity that wants money. Giving to charity is probably the biggest moral credential on LW. Coincidence? I think not.
Ensuring the flow of money in a particular direction may not be the very best effort one can put into making the world a better place. Sure, it’s something, and at least in the short term a very vital something, but more than anything else it seems to be a way to patch up, or prop up, a part of the system that was shaky to begin with. The long-term end goal should be to make people less reliant on charity money. Sometimes there is a shortage of knowledge, or of power, or of good incentives, rather than of money. “Throwing money at a cause” is just one way to help—although I suppose effective altruist organizations already incorporate the knowledge of this problem in their concept of “room for more funding”.
We already have governments that take away a large portion of our incomes anyway, that have systems in place for allocating funds and efforts, and that purport to promote the same kinds of causes as charities, yet often function inefficiently and even harmfully. However, they’re a lot more reliable in terms of actually ensuring the collection of “enough” funds. To pay taxes and to give to charity (yes, I’m aware that charitable giving unlocks tax deductions) is to contribute to two systems that are doing the same job, the second being there mostly because the first isn’t doing its job as it should. In this way, and possibly assuming that EA would be a larger movement in the future than it is now, charity might work to mask government inefficiencies and damage or to clean up after them.
In the context of earning to give, participating in a particularly noxious industry as a way of earning your livelihood, and using part of that money to contribute to altruist causes, is something that looks to me like a tax on the well-being you thus cause in the world. I’m not sure that tax is always smaller than 100%. And it’s more difficult to quantify the negative externalities from your job than it is to quantify the positive effects of your donations, because the former are more causally distant.
To take the discussion back to the meta level, I’m but one user with not so much karma and probably a non-central example of a LessWronger, so I don’t demand that anyone accommodate me and my preference not to discuss EA. However, knowing that other users basically come from an effective altruism mindset makes discussion with them somewhat difficult, since we don’t have the same assumptions about the relationship between money and welfare. The most annoying of all is the rare, occasional display of charitable snobbery, or a commitment not to aid first world people who are not effective altruists, or who don’t donate enough. (I’ve seen that, but Google seems to fail me at this moment.) It seems easier and more pleasant to discuss ethical matters with people who don’t come from an EA worldview, and personally I’d like to see more of a plurality of approaches on the matter on LW.
tl;dr It’s a rich people thing and therefore alien to me; as for objective merits, I’ve got mixed positive and negative feelings about it. But in the end, to each their own.
There seem to be two broad categories of discussion topics on LessWrong: topics that are directly and obviously rationality-related (which seems to me to be an ever-shrinking category), and topics that have come to be incidentally associated with LessWrong to the extent that its founders / first or highest-status members chose to use this website to promote them—artificial intelligence and MIRI’s mission along with it, effective altruism, transhumanism, cryonics, utilitarianism—especially in the form of implausible but difficult dilemmas in utilitarian ethics or game theory, start-up culture and libertarianism, polyamory, ideas originating from Overcoming Bias which, apparently, “is not about” overcoming bias, NRx (a minor if disturbing concern)… I could even say California itself, as a great place to live in.
As a person interested in rationality and little else that this website has to offer, I would like for there to be a way to filter out cognitive improvement discussions from these topics. Because unrelated and affiliated memes are given more importance here than related and unaffiliated memes, I have since begun to migrate to other websites* for my daily dose of debiasing. Obviously it would be all varieties of rude of me to tell everybody else “stop talking about that stuff! Talk about this stuff instead… while I sit here in the audience and enjoy listening to you speaking”, and obviously the best thing I could do to further my purpose of seeing more rationality material on LessWrong would be to post some high-quality rationality material—which I do plan on doing, but I still feel that my ideas have some maturing and polishing to undergo before they’re publishable. So what I intend to do with this post is to poll people for thoughts and opinions on this matter, and perhaps re-raise the old discussions about revamping the Main/Discussion division of LessWrong.
Also, for what it’s worth, it seems to me that most of the bad PR LessWrong gets comes from those topics that I’ve mentioned in the first paragraph being more visible to outsiders than the stated mission of “refining the art of human rationality”. People often can’t get beyond the peculiarities of Bayland to the actual insights that we value this community most for—and to be honest, if I hadn’t read the Sequences first and instead got hit in the face with persuasions to donate to charity or to believe in x-risk or to get my head frozen upon my first visit to LW, I’d have politely “No-Thank-You”ed the messengers like I do door-to-door salesmen. To outsiders not predisposed to be friendly to transhumanism & co. through their demographics, to conflate the two sides of LessWrong is to devalue the side that champions rationality. Unless, of course, that was the point all along and LessWrong has less intrinsic value for the founders than its purpose as an attractor of smart, concerned young people.
* notably SSC, RibbonFarm, TheLastPsychiatrist, and even highly biased but well-written blogs coming from the opposite side of the political spectrum—hopefully for our respective biases to cancel out and for me to be left with a more accurate worldview than I started out with. (I don’t read political material that I agree with, and to be honest it would be difficult to even come across texts prioritizing the same issues that I care about. I sometimes feel like I’m the first one of my political inclination...) I’m not necessarily endorsing any of these for anyone else (except Scott, read Scott, he’s amazing), it’s just that there is where I get my food for thought. They raise issues and put a new spin on things that don’t usually occur to me.
My observation about cults, from personal experience leading them
* raises eyebrow *
How rare it is to encounter advice about the future which begins from a premise of incomplete knowledge!
— James C. Scott, Seeing Like a State
I don’t read the comments on SSC mostly because of the very, very poor comment section layout (nesting comments just significantly narrows the width of every “child” comment), not because of comment quality. Besides, if I perceived SSC comments as rather poor in quality, it would be mostly because of the significantly larger contrast between them and the main blog post. (At least here in Discussion, it’s rare that the LessWronger writing the main post writes significantly better than the several top commenters, whereas on SSC, Scott is, well, Scott and everybody else is just everybody else.)
Besides, I don’t find the addition of an up/down vote feature an improvement, rather the opposite, especially when it comes to the kinds of topics Scott touches upon. I often don’t use my own up/down vote buttons because I often feel like “who the hell am I to judge?”, particularly when I know I lack expertise in something. Other people, also without qualifications, may not be as scrupulous. While it does clutter up the comment section, agreement or disagreement expressed verbally, non-anonymously (because it psychologically weighs differently depending on the impressiveness of the person it comes from), and with justifications seems to me to help the overall quality of discussion.
Wasn’t there a less passive-aggressive way of expressing this complaint, or a more appropriate context for it?
I remember having read a discussion about this in a recent open thread; there was one particular response which I liked. Give me a moment...
Later: Found! User sixes_and_sevens asked people to think of eight categories in which to divide LessWrong. Emile came up with the following list:
Self-improvement, optimal living, life hacks
Philosophy
Futurism (Cryonics, the singularity)
Friendly AI and SIAI, I mean, MIRI
Maths, Decision Theory, Game theory
Meetups
General-interest discussion (biased towards the interests of atheist nerds)
Meta
(If you liked the suggestions, you could go upvote his post instead of mine.)
My own minor correction to the list would be to merge Futurism & FAI into one category, and perhaps do the same with Philosophy and Math, Decision Theory, Game Theory etc. (so as to have all the theoretical stuff in the same place—the Sequences, for example, could go here), but other than that I agree with it.
As for Main, perhaps we could implement reddit’s recent & all-time best submissions lists, sorted by karma, percentage of upvotes, or a combination of both. An entire subreddit devoted to posts worthy of promotion seems not only unnecessary as long as we have a karma system, but a potential source of drama. As it is right now, users who wish to make a post are basically asked whether they believe they’re about to make a great post or not, and their choice invites others to judge whether they were being appropriately humble. (I don’t think it’s a mystery to anybody why “moved to Discussion” mod posts sometimes get so many upvotes.) I think it’s rather excessive to invite these considerations into the picture; just wait and see how much karma the post gets and that’s it.
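For what it’s worth, the “combination of both” sorting suggested above is roughly what reddit’s “best” comment sort does: it ranks items by the lower bound of the Wilson score confidence interval on the upvote proportion, which rewards a high upvote percentage while discounting small vote counts. A rough sketch (the function name is my own):

```python
import math

def wilson_lower_bound(ups: int, downs: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the upvote
    proportion, the statistic behind reddit's 'best' sort.
    It blends raw score and upvote percentage into one number:
    a high percentage helps, but only as the sample grows."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# A 60-vote post at 80% approval outranks a 4-vote one at 100%.
print(wilson_lower_bound(48, 12) > wilson_lower_bound(4, 0))  # True
```

The nice property for a karma system is that it needs no hand-tuned weighting between “score” and “percentage”; the confidence interval does that balancing automatically.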
ETA: If you do split up LW into multiple subreddits, would that mean that all past posts would have to be re-categorized? Or would they be archived, so as to avoid the tedium that recategorization would involve, and start afresh with the new categories? Or perhaps you’re thinking of keeping some analogue of the Main and Discussion categories, and transferring all past posts to their respective categories, while the new (non-Main, non-Discussion) subreddits that would be created would start out empty? (Judging by the structures of both ideas proposed in the OP, I’m inclined to believe the third hypothesis.)
Right. Stop. Just stop. I can see right through what you’re doing now.
It wasn’t a “perfectly reasonable hypothesis”, it was meant to reflect badly on me; it was an oblique accusation that I broke the social norm of not calling people stupid, or of arrogantly believing everybody who disagrees with me to be stupid. Of course I don’t believe that you, or anybody smart enough to be on LW, would ever give serious consideration to the hypothesis that they’re really, truly, honest-to-God dumb; no, you’re a bunch of reasonably smart guys who are aware that they’re smart. Of course I chose the other interpretation of your words, the one that is in line with your interests in this discussion, the one that doesn’t conflict with the fact that people tend to maintain a flattering image of themselves, especially when facing people they disagree with, the one that is consistent with the kind of attitude you maintained towards me during this discussion—the one that assumes bad faith on your part. So no, you can’t just go around now and say that, oh, no, it was totally sincere and innocent.
As for the big question of the story—do I believe one has to be a dumbass like this acquaintance of mine to disagree with me on this? Of course not—predictably. I wasn’t surprised that they (the acquaintance) didn’t see it because, take my word for it, they just weren’t blessed with great intelligence. If, on the other hand, I see someone on here disagreeing with me on this, I explain it to myself this way: perhaps they misunderstood, or perhaps they reacted badly to one part of my post and consistency compelled them to react badly to the rest, or maybe even (but this is unlikely) I am missing something. But the hypothesis that I just ran into a complete idiot doesn’t cross my mind. And I’m writing this just so that I don’t have to explain myself again.
That was tiresome. Going through the intricacies of interpersonal affairs always is. Please, do me a favour and next time we talk, do your part on cutting the micropolitics to a minimum; the amount of noise that a non-neutral reply generates is ridiculous.
You should give more credit to the emotional part of your brain :) It’s not that stupid. There’s a little extra something in-between the pain and the person causing it, that triggers the reaction of hatred against the person—probably the expectation of hostile intentions. It’s likely not a simple two-item person+pain=hatred association arc; even our emotional selves know this.
Alright, but
1) Overly risky for whom? I personally don’t feel I have exposed myself to any risk other than vindictive downvoting, and if that happens (it hasn’t yet) I trust that the resulting karma issues won’t affect my participation much.
2) As far as I know LW doesn’t have a well set-up report & moderation system. I even searched for the official rules of conduct and only found a page on what people should not talk about (not on how they should behave generally, or the policy for bans and banned members). I don’t remember seeing a list of all the mods on LW or which of them is currently online. Viliam Bur said here that it might take even mods a long time to get to the bottom of such an issue.
3) Some people who have an interest in knowing this (e.g. people who have been the targets of mass downvoting) might have been duped otherwise, and I view this as an intrinsically bad thing.
Inaccurately polarized ideas about Thiel’s politics, general divisiveness and hostilities between LessWrongers, a fantastic opportunity for politically motivated trolls to come out, and spillover nasty rumours with regards to Thiel himself.
And “‘so’ concerned” may be pushing it a bit.
My motley collection of thoughts upon reading this (please note that, wherever I say “you” or “your” in this post, I’m referring to the whole committee that is working on this ebook, not to you, lukeprog, in particular):
It’s a difficult book to name, chiefly because the sequences themselves don’t really have a narrow common thread; eliminating bias and making use of scientific advances don’t qualify as narrow enough, many others are trying to do that these days. (But then again, I didn’t read them in an orderly fashion, or enough times, to be able to identify the common thread if there is one more specific than that. If there is one, by all means, play on that.)
Absolutely no mention of anything such as The Less Wrong Sequences, 2006-2009. This belongs in a blurb or in an introduction to the book. You probably think that, by using that in a title, you’re telling readers the following: the contents of this book were originally published as sequences of blog posts on the website lesswrong.com, from 2006 to 2009. But you’re not. This information can be conveyed in a sentence such as that one, but it cannot be conveyed in a short title, given that readers are unfamiliar with the terms. There isn’t really a way for them to guess from a quick glance at the title that “Less Wrong” means the website LessWrong.com, or that “the Sequences” means several series of blog posts around which the LessWrong community was formed, or what all of that has to do with them.
And even so—is that the first thing you wish to tell your readers? What happened to the contents of the book before they were made into a book...? And in a form which is basically incomprehensible to them? While giving little insight into the content itself? And do you really, honestly think that you’re not doing the material a disservice by telling the readers that it was first published on some guy’s blog, before they know anything else about the book (i.e. how it distinguishes itself from ordinary blog posts)? If the first association is with something as low-status as a blog, then that’s gonna be the lowest common denominator—you’re gonna have to work up from that, which is harder than working up from the expectation of an average pop-sci book. (Thankfully for you, though, the readers won’t be able to draw those inferences; see the paragraph above.)
The rest of the suggestions—The Craft of Rationality, The Art of Rationality, Becoming Less Wrong—they’re not technically bad, but… they’re—they’re weak. They’re not distinctive. The authors out there trying to establish themselves as the masters of the “art/craft” of something are a dime a dozen. Sure, LWers are probably the most eager bunch to claim “the art of rationality” for themselves, or at least this is what a quick internet search told me, but the connection isn’t immediately established in the minds of the readers.
Careful about any unflattering allusions to the reader’s intelligence. They can be taken well if presented in a humorous/witty form, but you have to make believable promises that the book will help readers overcome them. Also (and this is directed mainly towards the rest of the commenters), everything that suggests that the book is meant to drill the “correct” ideas into your head, rather than teach you how to develop good thinking practices on your own, is a no-no.
How come Eliezer hasn’t come up with a good, catchy title yet? I’ve just gone over the titles of the blog posts included in the sequences, and those ones are very good, very appropriate as chapter/subchapter titles. He’s good at this titling business. Surely he could think up something witty for the one title to rule them all?
No suggestions from me just yet. I need to think this through better.