You’re focusing on negative reinforcement for bad comments. What we need is positive reinforcement for good comments. Because there are so many ways for a comment to be bad, discouraging any given type of bad comment will do effectively nothing to encourage good comments.
“Don’t write bad posts/comments” is not what we want. “Write good posts/comments” is what we want, and confusing the two means nothing will get done.
We need to discourage comments that are merely not-good, not just the plainly bad ones: comments that add no value but still take time to read.
The time lost per comment is trivial, but the time lost reading a thousand comments isn’t. How long does it take LW to produce a thousand comments? A few days at most.
This article alone has about 100 comments. Did you get 100 insights from reading them?
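To make the cost concrete, here is a back-of-envelope sketch; every figure in it is an assumption chosen for illustration, not a measured statistic:

```python
# Rough cost of marginal comments to the community as a whole.
# All three numbers below are illustrative assumptions.
seconds_per_comment = 20   # assumed average skim time per comment
comments = 1000            # roughly a few days of LW output, per the above
readers = 200              # assumed number of readers per comment

reader_hours = comments * seconds_per_comment * readers / 3600
print(f"Collective reading time: about {reader_hours:.0f} reader-hours")
```

Even with modest assumptions the total runs to four figures of reader-hours every few days, which is the point being made above.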
Why do the first three questions have four variations on the theme of “new users are likely to erode the culture” and nothing intermediate between that and “there is definitely no problem at all”?
Why ask for the “best solution” rather than asking “which of these do you think are good ideas”?
“Assuming user is of right type/attitude, too many users for acculturation capacity.”
Imagine this: There are currently 13,000 LessWrong users (well, more, since that figure is from a few months ago and there’s been a Summit since then) and about 1,000 are active. Imagine LessWrong gets Slashdotted: some big publication does an article on us, and instead of portraying LessWrong as “cold and calculating”, or something like Wired’s wording for the futurology Reddit where SingInst had posted about AI (“A sub-reddit dedicated to preventing Skynet”), they actually say something good, like “LessWrong solves X Problem”. Not infeasible, since some of us do a lot of research and test our ideas.
Say so many new users join in the space of a month and there are now twice as many new active users as older active users.
This means 2⁄3 of LessWrong is clueless, posting annoying threads, and acting like newbies. Suddenly, it’s not possible to have intelligent conversation about the topics you enjoy on LessWrong anymore without two people throwing strawman arguments at you and a third saying things that show obvious ignorance of the subject. You’re getting downvoted for saying things that make sense, because new users don’t get it, and the old users can’t compensate for that with upvotes because there aren’t enough of them.
THAT is the type of scenario the question is asking about.
I worded it as “too many new users for acculturation capacity” because I don’t think new users are a bad thing. What I think is bad is when there are an overwhelming number of them such that the old users become alienated or find it impossible to have normal discussions on the forum.
Please do not confuse “too many new users for acculturation capacity” with “new users are a bad thing”.
Why do the first three questions have four variations on the theme of “new users are likely to erode the culture” and nothing intermediate between that and “there is definitely no problem at all”?
Why do you not see the “eroded the culture” options as intermediate options? The way I see it is there are three sections of answers that suggest a different level of concern:
There’s a problem.
There’s some cultural erosion but it’s not a problem (Otherwise you’d pick #1.)
There’s not a problem.
What intermediate options would you suggest?
Why ask for the “best solution” rather than asking “which of these do you think are good ideas”?
A. Because the poll code does not make check boxes where you select more than one. It makes radio buttons where you can select only one.
B. I don’t have infinite time to code every single idea.
If more solutions are needed, we can do another vote and add the best one from that (assuming I have time). One thing at a time.
Well, I did not imagine all the possibilities for what concerns you guys would have in order to choose verbiage sufficiently vague that those options would work as perfect catch-alls, but I did ask for “other causes” in the comments, and I’m interested to see the concerns that people are adding, like “EY stopped posting” and “We don’t have enough good posters”, which aren’t about cultural erosion but about a lapse in the stream of good content.
If you have concerns about the future of LessWrong not addressed so far in this discussion, please feel free to add them to the comments, however unrelated they are to the words used in my poll.
I have no particular opinion on what exactly should be in the poll (and it’s probably too late now to change it without making the results less meaningful than they’d be without the change). But the sort of thing that’s conspicuously missing might be expressed thus: “It’s possible that a huge influx of new users might make things worse in these ways, or that it’s already doing so, and I’m certainly not prepared to state flatly that neither is the case, but I also don’t see any grounds for calling it likely or for getting very worried about it at this point.”
The poll doesn’t have any answers that fit into your category 2. There’s “very concerned” and “somewhat concerned”, both of which I’d put into category 1, and then there’s “not at all”.
Check boxes: Oh, OK. I’d thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told “Polls must have at least two choices”. If anyone with the power to change the code is reading this, I’d like to suggest that removing this check would both simplify the code and make the system more useful. An obvious alternative would be to add checkbox polls, but that seems like it would be more work.
[EDITED to add: Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn’t by me.]
[EDITED again to add: I see I got downvoted too. I’d be grateful if someone who thinks this comment is unhelpful could explain why; even after rereading it, it still looks OK to me.]
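The suggested change is tiny in code terms. A hypothetical sketch of what the relaxed validation might look like (the names `validate_poll` and `PollError` are invented for illustration, not taken from the actual LessWrong codebase):

```python
# Hypothetical sketch of poll-choice validation; not the actual LW code.
class PollError(Exception):
    pass

def validate_poll(choices, allow_single=True):
    """Reject empty polls; optionally allow one-choice 'agree/disagree' polls."""
    minimum = 1 if allow_single else 2
    if len(choices) < minimum:
        raise PollError(f"Polls must have at least {minimum} choice(s)")
    return choices

validate_poll(["Yes"])                 # passes once the check is relaxed
validate_poll(["Radio A", "Radio B"])  # ordinary multiple-choice poll
```

A series of single-choice agreement polls would then serve as a crude stand-in for checkboxes, one poll per option.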
Yes. I asked because my mind drew a blank on intermediate options between some problem and none. I interpreted some problem as being intermediate between problem and no problem.
“It’s possible that a huge influx of new users might make things worse in these ways, or that it’s already doing so, and I’m certainly not prepared to state flatly that neither is the case, but I also don’t see any grounds for calling it likely or for getting very worried about it at this point.”
Ok, so your suggested option would be (to make sure I understand) something like “I’m not convinced either way that there’s a problem or that there’s no problem”.
Maybe what you wanted was more of a “What probability of a problem is there?” not “Is there a problem or not, is it severe or mild?”
Don’t know how I would have combined probability, severity and urgency into the same question, but that would have been cool.
I’d thought there was a workaround by making a series of single-option multiple-choice polls
I considered that (before knowing about the two-options requirement), but (in addition to the other two concerns) it would make the poll really long and full of repetition. I was trying to be as concise as possible: my instinct is to be verbose, but I realize I’m doing a meta thread, and that’s not really appreciated on meta threads.
Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn’t by me.
Oh, OK. I’d thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told “Polls must have at least two choices”.
It sounds like you could still work around it by making several yes/no agreement polls, although this would be clunky enough that I’d only recommend it for small question sets.
The reason I selected it for the poll is that they are talking about creating online training materials. It would be more effective to send someone to something online from a website than to send them somewhere IRL from a website, as only half of us are in the same country.
Just as it didn’t occur to her that the organization could have changed its name, it didn’t occur to me that she could seriously think there were two of them.
I thought there were two centers for rationality, one being the “Center for Modern Rationality” and the other being the “Center for Applied Rationality”. Adding a link to one of them didn’t rule out the possibility of there being a second one.
So, you assigned a higher probability to there being two organizations from the same people on the same subject at around the same time with extremely similar names and my correction being mistaken in spite of my immersion in the community in real life… than to you having out-of-date information about the organization’s name?
The possibility that the organization had changed its name did not occur to me. I wish you would have just said “It changed its name.”
As for why I did not assume you knew better than me: The fact that the article was right there talking about the “Center for Modern Rationality” contradicted your information.
I have never met an infallible person, so in the event that I have information that contradicts yours, I will probably think that you’re wrong.
It’s nice when all the possibilities for why my information contradicts others’ occur to me, so that I can do something like go search for whether the name of an organization was changed, but that doesn’t always happen.
If you knew that it used to be called “Center for Modern Rationality” and changed its name to “Center for Applied Rationality”, why did you not say “It changed its name”?
I’ve noticed a pattern with you: Your responses are often missing some contextual information such that I respond in a way that contradicts you. I think you would find me less frustrating if you provided more context.
I think you would find me less frustrating if you provided more context.
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct than your own thoughts and updated accordingly.
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct than your own thoughts and updated accordingly.
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct
Agreed. That’s easier. However, sometimes the easier way is not the correct way.
I wish I could trust others’ information. I have wished that my entire life. It is frequently exhausting and damn hard to question this much of what people say. But I want to be correct, not merely pleasant, and that’s life.
Eliezer intended for us to question authority. I’d have done it anyway because I started doing that ages ago. But he said in no uncertain terms that this is what he wants:
In Two More Things to Unlearn from School he warns his readers that “It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism.”
In Cached Thoughts he tells you to question what HE says. “Now that you’ve read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!”
Perhaps there is a way to be more pleasant while still questioning everything. If you can think of something, I will consider it.
I’m not saying that a hypothetical vague “you” shouldn’t question things. I’m saying that you specifically, User: Epiphany, seem to not be very well-calibrated in this respect and should update towards questioning things less until you have a better feel for LessWrong discussion norms and epistemic standards.
I’m not saying that a hypothetical vague “you” shouldn’t question things.
Neither was I:
what reason do I have to believe that any authority figure or expert or established user is more likely to be correct?
I’m saying that you specifically, User: Epiphany, seem to not be very well-calibrated in this respect and should update towards questioning things less until you have a better feel for LessWrong discussion norms and epistemic standards.
So, trust you guys more while I’m still trying to figure out how much to trust you? Not going to happen, sorry.
Perhaps the perception you’re having is caused by the fact that you did not know how cynical I was when I started. My trust has increased quite a bit. If I appear not to trust Alicorn very much, this is because I’ve seen what appears to be an unusually high number of mistakes. I realize that this may be due to a biased sample (I haven’t read thousands of Alicorn’s posts, maybe a dozen or so). But I’m not going to update with information I don’t have, and I don’t see it as a good use of time to go reading lots and lots of posts by Alicorn and whoever else trying to figure out how much to trust them. I will have a realistic idea of her eventually.
You might think about the reasons people have for saying the things they say. Why do people make false statements? The most common reasons probably fall under intentional deception (“lying”), indifference toward telling the truth (“bullshitting”), having been deceived by another, motivated cognition, confabulation, or mistake. As you’ve noticed, scientists and educators can face situations where complete integrity and honesty come into conflict with their own career objectives, but there’s no apparent incentive for anyone to distort the truth about the name of the Center for Applied Rationality. There’s also no apparent motivation for Alicorn to bullshit or confabulate; if she isn’t quite sure she remembers the name, she doesn’t have anything to lose by simply moving on without commenting, nor does she have much to gain by getting away with posting the wrong name. That leaves the possibility that she has the wrong name by an unintended mistake. But different people’s chances of making a mistake are not necessarily equal. By being more directly involved with the organization, Alicorn has had many more opportunities to be corrected about the name than you have. That makes it much more likely that you are the one making the mistake, as turned out to be the case.
Perhaps there is a way to be more pleasant while still questioning everything. If you can think of something, I will consider it.
You could phrase your questions as questions rather than statements. You could also take extra care to confirm your facts before you preface a statement with “no, actually”.
there’s no apparent incentive for anyone to distort the truth about the name of the Center for Applied Rationality. There’s also no apparent motivation for Alicorn to bullshit or confabulate
I know. But it’s possible for her to be unaware of the existence of CFMR, had there been two orgs. If you read the entire disagreement, you’ll notice that what it came down to is that it did not occur to me that CFMR might have changed its name. Therefore, denial that it existed appeared to be in direct conflict with the evidence, the evidence being two articles where people were creating CFMR.
Alicorn has had many more opportunities to be corrected about the name than you have.
I was surprised she didn’t seem to know about it, but then again, if she doesn’t read every single post on here, it’s possible she didn’t know. I don’t know how much she knows, or who she specifically talks to, or how often she talks to them, or whether she might have been out sick for a month or what might have happened. For something that small, I am not going to go to great lengths to analyze her every potential motive for being correct or incorrect. My assessment was simple for that reason.
As for wanting to trust people more, I’ve been thinking about ways to go about that, but I doubt I will do it by trying to rule out every possible reason for them to have been wrong. That’s a long list, and it’s dependent upon my imperfect ability to think of all the reasons that a person might be wrong. I’m more likely to go about it from a totally different angle: How many scientists are there? What things do most of them agree on? How many of those have been proven false? Okay, that’s an estimated X percent chance that what most scientists believe is actually true based on sample set of (whatever) size.
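The estimation approach described here can be sketched directly; the counts below are invented placeholders, since no actual survey of overturned consensus claims is being cited:

```python
import math

# Hypothetical tallies; stand-ins for a real survey of consensus claims.
claims_checked = 200      # consensus claims from some sample that were later tested
claims_overturned = 14    # how many of those turned out to be false

p = claims_overturned / claims_checked
se = math.sqrt(p * (1 - p) / claims_checked)   # standard error of a proportion
print(f"Estimated chance a consensus claim is wrong: {p:.1%}")
print(f"Crude 95% interval: {p - 1.96 * se:.1%} to {p + 1.96 * se:.1%}")
```

The interval matters as much as the point estimate: with a small sample, “an estimated X percent chance” hides a lot of uncertainty.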
You could phrase your questions as questions rather than statements.
This is a good suggestion, and I normally do.
You could also take extra care to confirm your facts before you preface a statement with “no, actually”.
I did confirm my fact with two articles. That is why it became a “no actually” instead of a question.
This seems like a risky heuristic to apply generally, given the volume of domain-specific contrarianism floating around here. My own version is more along the lines of “trust, but verify”.
It’s a specific problem Epiphany has that she assumes her own internal monologue of what’s true is far more reliable than any evidence or statements to the contrary.
That’s not a problem unless it’s false. Almost all evidence and statements to the contrary are less reliable than my belief regarding what’s true.
That’s a very expensive state to maintain, since I got that way by altering my internal description of what’s true to match the most reliable evidence that I can find...
I don’t think I am right about everything, but I relate to this. I am not perfectly rational. But I decided to tear apart and challenge all my cached thoughts around half my life ago (well over a decade before Eliezer wrote about cached thoughts of course, but it’s a convenient term for me now) and ever since then, I have not been able to see authorities the same way...
I think it would be ideal if we were all to strive to do enough hard work that we’ve successfully altered our internal description of what’s true to match the most reliable evidence on so many different topics as to be able to see fatal flaws in the authoritative views more often than not.
Considering the implications of the first three links in this post, that accomplishment may not be an unrealistic one. Sadly, I don’t say this because I think we’re all so incredibly smart, but because the world is so incredibly broken.
I’ve never accepted that belief in the authority on any subject could pay rent. The biggest advantage experts have to me is when they can quickly point me to the evidence that I can evaluate fastest to arrive at the correct conclusion; rather than trust Aristotle that heavier items fall faster, I can duplicate any number of experiments that show that any two objects with equal specific air resistance fall at exactly the same speed.
Downside: It is more expensive to evaluate the merits of the evidence than the credentials of the expert.
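The experiment mentioned above also falls straight out of the kinematics: from h = (1/2) g t^2, the fall time t = sqrt(2h/g) contains no mass term at all. A quick numerical check (the 20 m drop height is an arbitrary choice):

```python
G = 9.81  # m/s^2, standard gravity

def fall_time(height_m):
    # From h = (1/2) * g * t**2, solved for t; mass never enters the formula.
    return (2 * height_m / G) ** 0.5

t = fall_time(20.0)
print(f"Anything dropped from 20 m in vacuum lands after about {t:.2f} s.")
```

Air resistance is what breaks the symmetry in practice, which is why the original claim is qualified with “equal specific air resistance”.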
The biggest advantage experts have to me is when they can quickly point me to the evidence that I can evaluate fastest to arrive at the correct conclusion
I relate to this.
Downside: It is more expensive to evaluate the merits of the evidence than the credentials of the expert.
There simply isn’t enough time to evaluate everything. When it’s really important, I’ll go to a significant amount of trouble. If not, I use heuristics like “How likely is it that something as easy to test as this made its way into the school curriculum and is also wrong?” If I have too little time or the subject is of little importance, I may decide the authoritative opinion is more likely to be right than my not-at-all-thought-out opinion, but that’s not the same as trusting authority. That’s more like slapping duct tape on, to me.
Slightly wrong heuristic. Go with “What proportion of things in the curriculum that are this easy to test have been wrong when tested?”
The answer is disturbing. Things like ‘Glass is a slow-flowing liquid’.
Actually, ‘Glass is a slow-flowing liquid’ would take decades to test, wouldn’t it? I think you took a different meaning of “easy to test”. I meant something along the lines of “a thing that just about anyone can do in a matter of minutes without spending much money.”
Unless you can think of a fast way to test the glass is a liquid theory?
Unless you can think of a fast way to test the glass is a liquid theory?
Look at old windows that have been in for decades. Do they pile up on the bottom like caramel? No. Myth busted.
More interesting than simple refutation though is “taboo liquid”. Go look at non-newtonian fluids and see all the cool things that matter can do. For example, Ice and rock flow like a liquid on a large enough scale (glaciers, planetary mantle convection).
Look at old windows that have been in for decades. Do they pile up on the bottom like caramel? No. Myth busted.
I actually believed that myth for ages because the panes in my childhood house were thicker on the bottom than on the top, causing visible distortion. Turns out that making perfectly flat sheets of glass was difficult at the time it was built, and that for whatever reason they’d been put in thick side down.
Oh. Yeah. Good point. Obviously I wasn’t thinking too hard about this. Thank you.
Wait, so they put the glass-is-a-liquid theory into the school curriculum, and it was this easy to test?
I don’t recall that in my own school curriculum. I’ll be thinking about whether to reduce my trust for my own schooling experience. It can’t go much further down after reading John Taylor Gatto, but if the remaining trust that is there is unfounded, I might as well kill it, too.
This is the first one that comes to mind. I might post others as I find them, but to be honest I’m too lazy to go through your logs or my IRC logs to find the examples.
That is an example of me not being aware of how others use a word, not an example of me believing I am correct when others disagree with me and then being wrong. In fact, I think that LessWrong and I agree for the most part on that subject. We’re just using the word elitism differently.
Do you have even a single example of me continuing to think I am correct about something where a matter of truth (not wording) is concerned even after compelling evidence to the contrary is presented?
I think I would find you less frustrating if I stopped trying to interact with you in the first place. Please remind me that I said this if I ever try again.
I had a couple of ideas like this myself and I chose to cull them before doing this poll for these reasons:
The problem with splitting the discussions is that then we’d end up with people having the same discussions in multiple different places. The different posts would not have all the information, so you’d have to read several times as much if you wanted to get it all. That would reduce the efficiency of the LessWrong discussions to a point where most would probably find it maddening and unacceptable.
We could demand that users stick to a limited number of subjects within their subdivision, but then discussion would be so limited that user experience would not resemble participation in a subculture. Or, more likely, it just wouldn’t be enforced thoroughly enough to stop people from talking about what they want, and the dreaded plethora of duplicated discussions would still result.
The best alternative to this as far as I’m aware is to send the users who are disruptively bad at rational thinking skills to CFAR training.
The best alternative to this as far as I’m aware is to send the users who are disruptively bad at rational thinking skills to CFAR training.
That seems like an inefficient use of CFAR training (and so an inefficient use of whatever resources that would have to be used to pay CFAR for such training). I’d prefer to just cull those disruptively bad at rational thinking entirely. Some people just cannot be saved (in a way that gives an acceptable cost/benefit ratio). I’d prefer to save whatever attention or resources I was willing to allocate to people-improvement for those that already show clear signs of having thinking potential.
I am among those absolutely hardest to save, having an actual mental illness. Yet this place is the only thing saving me from utter oblivion and madness. Here is where I have met my only real friends ever. Here is the only thing that gives me any sense of meaning, reason to survive, or glimmer of hope. I care fanatically about it.
Many of the rules that have been proposed, or for that matter even the amount of degradation that has ALREADY occurred, would have kept me out. If that had been the case a few years ago, I wouldn’t exist; this body would either be rotting in the ground, or literally occupied by an inhuman monster bent on the destruction of all living things.
I’m fascinated. (I’m a psychology enthusiast who refuses to get a psychology degree because I find many of the flaws with the psychology industry unacceptable). I am very interested in knowing how LessWrong has been saving you from utter oblivion and madness. Would you mind explaining it? Would it be alright with you if I ask you which mental illness?
Would you please also describe the degradation that has occurred at LW?
I’d rather not talk about it in detail, but it boils down to LW in general promoting sanity and connecting smart people. That extra sanity can be used to cancel out insanity, not just to create super-sanes.
Degradation: Lowered frequency of insightful and useful content, increased frequency of low quality content.
I have to admit I am not sure whether to be more persuaded by you or Armok. I suppose what it would come down to is a cost/benefit calculation that takes into account the amount of destruction saved by the worst as well as the amount of benefit produced by the best. Brilliant people can have quite an impact indeed, but they are rare and it is easier to destroy than to create, so it is not readily apparent to me which group it would be more beneficial to focus on, or if both, in what amount.
Practically speaking, though, CFAR has stated that they have plans to make web apps to help with rationality training and training materials for high schoolers. It seems to me that they have an interest in targeting the mainstream, not just the best thinkers.
I’m glad that someone is doing this, but I also have to wonder if that will mean more forum referrals to LW from the mainstream...
If you’re suggesting that duplicated discussions can be solved with paste, then you are also suggesting that we not make separate areas.
Think about it.
I suppose you might be suggesting that we copy the OP and not the comments. Often the comments have more content than the OP, and often that content is useful, informative and relevant. So, in the comments we’d then have duplicated information that varied between the two OP copies.
So, we could copy the comments over to the other area… but then they’re not separate...
Not seeing how this is a solution. If you have some different clever way to apply Ctrl+C, Ctrl+V then please let me know.
I assign non-negligible probability to some cause that I am not specifically aware of (sorta, but not exactly, an outside-context problem) having a negative impact on LW’s culture.
Endless September Poll:
I condensed the feedback I got in the last few threads into a summary of pros and cons of each solution idea, if you would like something for reference.
How concerned should we be about LessWrong’s culture being impacted by:
...overwhelming user influx? [pollid:366]
...trending toward the mean? [pollid:367]
...some other cause? [pollid:368]
(Please explain the other causes in the comments.)
Which is the best solution for:
...overwhelming user influx?
(Assuming user is of right type/attitude, too many users for acculturation capacity.)
[pollid:369]
...trending toward the mean?
(Assuming user is of wrong type/attitude, regardless of acculturation capacity.)
[pollid:370]
...other cause of cultural collapse?
[pollid:371]
Note: Ideas that involve splitting the registered users into multiple forums were not included for the reasons explained here.
Note: “The Center for Modern Rationality” was renamed to “The Center for Applied Rationality”.
You’re focusing on negative reinforcement for bad comments. What we need is positive reinforcement for good comments. Because there are so many ways for a comment to be bad, discouraging any given type of bad comment will do effectively nothing to encourage good comments.
“Don’t write bad posts/comments” is not what we want. “Write good posts/comments” is what we want, and confusing the two means nothing will get done.
We need to discourage comments that are merely not-good, not just the plainly bad ones: comments that add no value but still take time to read.
The time lost per comment is trivial, but the time lost reading a thousand comments isn’t. How long does it take LW to produce a thousand comments? A few days at most.
This article alone has about 100 comments. Did you get 100 insights from reading them?
That’s a good observation, but for the record, the solution ideas were created by the group, not just me.
If you want to see more positive reinforcement suggestions being considered, why not share a few of yours?
Why do the first three questions have four variations on the theme of “new users are likely to erode the culture” and nothing intermediate between that and “there is definitely no problem at all”?
Why ask for the “best solution” rather than asking “which of these do you think are good ideas”?
Also, why is there no option for “new users are a good thing?”
Maybe a diversity of viewpoints would be a good thing? How can you raise the sanity waterline by only talking to yourself?
The question is asking you:
“Assuming user is of right type/attitude, too many users for acculturation capacity.”
Imagine this: There are currently 13,000 LessWrong users (well, more, since that figure is from a few months ago and there’s been a Summit since then) and about 1,000 are active. Imagine LessWrong gets Slashdotted: some big publication does an article on us, and instead of portraying LessWrong as “cold and calculating”, or something like Wired’s wording for the futurology Reddit where SingInst had posted about AI (“A sub-reddit dedicated to preventing Skynet”), they actually say something good, like “LessWrong solves X Problem”. Not infeasible, since some of us do a lot of research and test our ideas.
Say so many new users join in the space of a month and there are now twice as many new active users as older active users.
This means 2⁄3 of LessWrong is clueless, posting annoying threads, and acting like newbies. Suddenly, it’s not possible to have intelligent conversation about the topics you enjoy on LessWrong anymore without two people throwing strawman arguments at you and a third saying things that show obvious ignorance of the subject. You’re getting downvoted for saying things that make sense, because new users don’t get it, and the old users can’t compensate for that with upvotes because there aren’t enough of them.
THAT is the type of scenario the question is asking about.
I worded it as “too many new users for acculturation capacity” because I don’t think new users are a bad thing. What I think is bad is when there are an overwhelming number of them such that the old users become alienated or find it impossible to have normal discussions on the forum.
Please do not confuse “too many new users for acculturation capacity” with “new users are a bad thing”.
Why do you not see the “eroded the culture” options as intermediate options? The way I see it is there are three sections of answers that suggest a different level of concern:
There’s a problem.
There’s some cultural erosion but it’s not a problem (Otherwise you’d pick #1.)
There’s not a problem.
What intermediate options would you suggest?
A. Because the poll code does not make checkboxes, where you can select more than one. It makes radio buttons, where you can select only one.
B. I don’t have infinite time to code every single idea.
If more solutions are needed, we can do another vote and add the best one from that (assuming I have time). One thing at a time.
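The radio-button limitation described above can be sketched roughly like this. This is a hypothetical Python illustration of the difference between the two form-control types, not LessWrong’s actual poll code; the function name and markup are made up for the example:

```python
# Hypothetical sketch: a single-choice poll renders radio inputs (all sharing
# one `name`, so the browser only lets you pick one), while a multi-choice
# poll would need checkbox inputs instead. Not LessWrong's real poll code.

def render_poll(question, choices, multi=False):
    input_type = "checkbox" if multi else "radio"
    lines = [f"<p>{question}</p>"]
    for i, choice in enumerate(choices):
        # radio inputs with the same name are mutually exclusive;
        # checkboxes with the same name are not
        lines.append(
            f'<input type="{input_type}" name="poll" value="{i}"> {choice}'
        )
    return "\n".join(lines)

print(render_poll("Is there a problem?", ["Yes", "Somewhat", "No"]))
```

Supporting “select all that apply” questions would mean adding a `multi`-style mode like this to the poll code, which is the extra work being declined above.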
The option I wanted to see but didn’t was something along the lines of “somewhat, but not because of cultural erosion”.
Well, I did not anticipate all the possible concerns you guys would have, so I couldn’t choose verbiage vague enough to work as a perfect catch-all. But I did ask for “other causes” in the comments, and I’m interested to see the concerns people are adding, like “EY stopped posting” and “We don’t have enough good posters”, which aren’t about cultural erosion but about a lapse in the stream of good content.
If you have concerns about the future of LessWrong not addressed so far in this discussion, please feel free to add them to the comments, however unrelated they are to the words used in my poll.
I have no particular opinion on what exactly should be in the poll (and it’s probably too late now to change it without making the results less meaningful than they’d be without the change). But the sort of thing that’s conspicuously missing might be expressed thus: “It’s possible that a huge influx of new users might make things worse in these ways, or that it’s already doing so, and I’m certainly not prepared to state flatly that neither is the case, but I also don’t see any grounds for calling it likely or for getting very worried about it at this point.”
The poll doesn’t have any answers that fit into your category 2. There’s “very concerned” and “somewhat concerned”, both of which I’d put into category 1, and then there’s “not at all”.
Check boxes: Oh, OK. I’d thought there was a workaround by making a series of single-option multiple-choice polls, but it turns out that when you try to do that you get told “Polls must have at least two choices”. If anyone with the power to change the code is reading this, I’d like to suggest that removing this check would both simplify the code and make the system more useful. An obvious alternative would be to add checkbox polls, but that seems like it would be more work.
[EDITED to add: Epiphany, I see you got downvoted. For the avoidance of doubt, it wasn’t by me.]
[EDITED again to add: I see I got downvoted too. I’d be grateful if someone who thinks this comment is unhelpful could explain why; even after rereading it, it still looks OK to me.]
Yes. I asked because my mind drew a blank on intermediate options between “some problem” and “none”. I interpreted “some problem” as being intermediate between “problem” and “no problem”.
OK, so (to make sure I understand) your suggested option would be something like “I’m not convinced either way that there’s a problem or that there isn’t”.
Maybe what you wanted was more of a “What probability is there of a problem?” question, not an “Is there a problem or not, and is it severe or mild?” one.
Don’t know how I would have combined probability, severity and urgency into the same question, but that would have been cool.
I considered that (before knowing about the two-option requirement), but (in addition to the other two concerns) it would make the poll really long and full of repetition. I was trying to be as concise as possible: my instinct is to be verbose, but I realize this is a meta thread, and that’s not really appreciated on meta threads.
Oh, thank you. (:
It sounds like you could still work around it by making several yes/no agreement polls, although this would be clunky enough that I’d only recommend it for small question sets.
It’s the Center for Applied Rationality, not Modern Rationality.
No, actually, there is a “Center for Modern Rationality” which Eliezer started this year:
http://lesswrong.com/lw/bpi/center_for_modern_rationality_currently_hiring/
Here is where they selected the name:
http://lesswrong.com/lw/9lx/help_name_suggestions_needed_for_rationalityinst/5wb8
The reason I selected it for the poll is that they are talking about creating online training materials. It would be more effective to send someone from a website to something online than to send them somewhere IRL, since only half of us are in the same country.
No. You’re wrong. They changed it, which you would know if you clicked my link.
I don’t see how clicking the link you posted would have actually demonstrated her wrong.
Just as it didn’t occur to her that the organization could have changed its name, it didn’t occur to me that she could seriously think there were two of them.
We have both acknowledged our oversights now. Thank you.
I thought there were two centers for rationality, one being the “Center for Modern Rationality” and the other being the “Center for Applied Rationality”. Adding a link to one of them didn’t rule out the possibility of there being a second one.
So, you assigned a higher probability to there being two organizations from the same people on the same subject at around the same time with extremely similar names and my correction being mistaken in spite of my immersion in the community in real life… than to you having out-of-date information about the organization’s name?
The possibility that the organization had changed its name did not occur to me. I wish you would have just said “It changed its name.”
As for why I did not assume you knew better than me: The fact that the article was right there talking about the “Center for Modern Rationality” contradicted your information.
I have never met an infallible person, so in the event that I have information that contradicts yours, I will probably think that you’re wrong.
It’s nice when all the possible reasons my information might contradict someone else’s occur to me, so that I can do something like search for whether an organization changed its name, but that doesn’t always happen.
If you knew that it used to be called “Center for Modern Rationality” and changed its name to “Center for Applied Rationality”, why did you not say “It changed its name.”?
I’ve noticed a pattern with you: Your responses are often missing some contextual information such that I respond in a way that contradicts you. I think you would find me less frustrating if you provided more context.
I think LessWrong as a whole would find you less frustrating if you assumed most comments from established users on domain-specific concepts or facts were more likely to be correct than your own thoughts and updated accordingly.
Established users can be wrong about many things, including domain-specific concepts or facts.
A more general heuristic that I do endorse, from Cromwell:
Agreed. That’s easier. However, sometimes the easier way is not the correct way.
In a world where the authoritative “facts” can be wrong more often than they’re right, where scientists often take a roughly superstitious approach to science, and where the educational system isn’t even optimized for the purpose of educating, what reason do I have to believe that any authority figure, expert, or established user is more likely to be correct?
I wish I could trust others’ information. I have wished that my entire life. It is frequently exhausting and damn hard to question this much of what people say. But I want to be correct, not merely pleasant, and that’s life.
Eliezer intended for us to question authority. I’d have done it anyway because I started doing that ages ago. But he said in no uncertain terms that this is what he wants:
In Two More Things to Unlearn from School he warns his readers that “It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism.”
In Cached Thoughts he tells you to question what HE says. “Now that you’ve read this blog post, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!”
Perhaps there is a way to be more pleasant while still questioning everything. If you can think of something, I will consider it.
I’m not saying that a hypothetical vague “you” shouldn’t question things. I’m saying that you specifically, User: Epiphany, seem to not be very well-calibrated in this respect and should update towards questioning things less until you have a better feel for LessWrong discussion norms and epistemic standards.
Neither was I:
So, trust you guys more while I’m still trying to figure out how much to trust you? Not going to happen, sorry.
So you’re trying to figure out how much to trust “us,” but you’re only willing to update in the negative direction?
Perhaps the perception you’re having is caused by the fact that you did not know how cynical I was when I started. My trust has increased quite a bit. If I appear not to trust Alicorn very much, this is because I’ve seen what appears to be an unusually high number of mistakes. I realize that this may be due to a biased sample (I haven’t read thousands of Alicorn’s posts, maybe a dozen or so). But I’m not going to update with information I don’t have, and I don’t see it as a good use of time to go reading lots and lots of posts by Alicorn and whoever else trying to figure out how much to trust them. I will have a realistic idea of her eventually.
You might think about the reasons people have for saying the things they say. Why do people make false statements? The most common reasons probably fall under intentional deception (“lying”), indifference toward telling the truth (“bullshitting”), having been deceived by another, motivated cognition, confabulation, or mistake.

As you’ve noticed, scientists and educators can face situations where complete integrity and honesty come into conflict with their own career objectives, but there’s no apparent incentive for anyone to distort the truth about the name of the Center for Applied Rationality. There’s also no apparent motivation for Alicorn to bullshit or confabulate; if she isn’t quite sure she remembers the name, she doesn’t have anything to lose by simply moving on without commenting, nor does she have much to gain by getting away with posting the wrong name. That leaves the possibility that she has the wrong name by an unintended mistake.

But different people’s chances of making a mistake are not necessarily equal. By being more directly involved with the organization, Alicorn has had many more opportunities to be corrected about the name than you have. That makes it much more likely that you are the one making the mistake, as turned out to be the case.
You could phrase your questions as questions rather than statements. You could also take extra care to confirm your facts before you preface a statement with “no, actually”.
I know. But it’s possible for her to be unaware of the existence of CFMR, had there been two orgs. If you read the entire disagreement, you’ll notice that what it came down to is that it did not occur to me that CFMR might have changed its name. Therefore, denial that it existed appeared to be in direct conflict with the evidence, the evidence being two articles where people were creating CFMR.
I was surprised she didn’t seem to know about it, but then again, if she doesn’t read every single post on here, it’s possible she didn’t know. I don’t know how much she knows, or who she specifically talks to, or how often she talks to them, or whether she might have been out sick for a month or what might have happened. For something that small, I am not going to go to great lengths to analyze her every potential motive for being correct or incorrect. My assessment was simple for that reason.
As for wanting to trust people more, I’ve been thinking about ways to go about that, but I doubt I will do it by trying to rule out every possible reason for them to have been wrong. That’s a long list, and it’s dependent upon my imperfect ability to think of all the reasons that a person might be wrong. I’m more likely to go about it from a totally different angle: How many scientists are there? What things do most of them agree on? How many of those have been proven false? Okay, that’s an estimated X percent chance that what most scientists believe is actually true based on sample set of (whatever) size.
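The base-rate approach described above can be made concrete with a toy calculation. This is a minimal sketch in Python; the function name is made up, and the numbers are invented for illustration, not real survey data:

```python
# Toy illustration of the base-rate heuristic described above:
# sample claims that most scientists agree on, count how many were
# later proven false, and treat the remainder as an estimated
# reliability for scientific consensus. Numbers here are made up.

def consensus_reliability(claims_checked, claims_proven_false):
    """Estimated chance that a consensus claim is actually true."""
    return 1 - claims_proven_false / claims_checked

# e.g. if 7 of 200 sampled consensus claims were later proven false:
print(f"{consensus_reliability(200, 7):.1%}")  # prints 96.5%
```

The hard part, of course, is the sampling itself: which consensus claims you check, and who counts as “most scientists”, will dominate the estimate far more than the arithmetic does.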
This is a good suggestion, and I normally do.
I did confirm my fact with two articles. That is why it became a “no actually” instead of a question.
I do read every single post on here. (Well, I skim technical ones.)
This seems like a risky heuristic to apply generally, given the volume of domain-specific contrarianism floating around here. My own version is more along the lines of “trust, but verify”.
It’s a specific problem Epiphany has that she assumes her own internal monologue of what’s true is far more reliable than any evidence or statements to the contrary.
That’s not a problem unless it’s false. Almost all evidence and statements to the contrary are less reliable than my belief regarding what’s true.
That’s a very expensive state to maintain, since I got that way by altering my internal description of what’s true to match the most reliable evidence that I can find...
I don’t think I am right about everything, but I relate to this. I am not perfectly rational. But I decided to tear apart and challenge all my cached thoughts around half my life ago (well over a decade before Eliezer wrote about cached thoughts of course, but it’s a convenient term for me now) and ever since then, I have not been able to see authorities the same way...
I think it would be ideal if we all strove to do enough hard work, altering our internal descriptions of what’s true to match the most reliable evidence on enough different topics, to be able to see fatal flaws in the authoritative views more often than not.
Considering the implications of the first three links in this post, that accomplishment may not be an unrealistic one. Sadly, I don’t say this because I think we’re all so incredibly smart, but because the world is so incredibly broken.
Did you start questioning early as well?
I’ve never accepted that belief in the authority on any subject could pay rent. The biggest advantage experts have, for me, is when they can quickly point me to the evidence I can evaluate fastest to arrive at the correct conclusion; rather than trust Aristotle that heavier items fall faster, I can duplicate any number of experiments showing that any two objects with equal specific air resistance fall at exactly the same speed.
Downside: It is more expensive to evaluate the merits of the evidence than the credentials of the expert.
I relate to this.
There simply isn’t enough time to evaluate everything. When it’s really important, I’ll go to a significant amount of trouble. If not, I use heuristics like “how likely is it that something as easy to test as this made its way into the school curriculum and is also wrong?” If I have too little time or the subject is of little importance, I may decide the authoritative opinion is more likely to be right than my absolutely-not-thought-out opinion, but that’s not the same as trusting authority. That’s more like slapping duct tape on, to me.
Slightly wrong heuristic. Go with “What proportion of things in the curriculum that are this easy to test have been wrong when tested?” The answer is disturbing. Things like ‘Glass is a slow-flowing liquid’.
Actually ‘Glass is a slow-flowing liquid’ would take decades to test, wouldn’t it? I think you took a different meaning of “easy to test”. I meant something along the lines of “A thing that just about anyone can do in a matter of minutes without spending much money.”
Unless you can think of a fast way to test the glass is a liquid theory?
Look at old windows that have been in for decades. Do they pile up on the bottom like caramel? No. Myth busted.
More interesting than simple refutation, though, is “taboo liquid”. Go look at non-Newtonian fluids and see all the cool things that matter can do. For example, ice and rock flow like a liquid on a large enough scale (glaciers, planetary mantle convection).
I actually believed that myth for ages because the panes in my childhood house were thicker on the bottom than on the top, causing visible distortion. Turns out that making perfectly flat sheets of glass was difficult at the time it was built, and that for whatever reason they’d been put in thick side down.
Oh. Yeah. Good point. Obviously I wasn’t thinking too hard about this. Thank you.
Wait, so they put the glass is a liquid theory into school curriculum and it was this easy to test?
I don’t recall that in my own school curriculum. I’ll be thinking about whether to reduce my trust for my own schooling experience. It can’t go much further down after reading John Taylor Gatto, but if the remaining trust that is there is unfounded, I might as well kill it, too.
You can’t taboo a word used in the premise.
Non-Newtonian fluids aren’t liquids, except when they are.
Granted, they are pretty cool though.
Give three examples.
http://lesswrong.com/lw/efv/elitism_isnt_necessary_for_refining_rationality/
This is the first one that comes to mind. I might post others as I find them, but to be honest I’m too lazy to go through your logs or my IRC logs to find the examples.
That is an example of me not being aware of how others use a word, not an example of me believing I am correct when others disagree with me and then being wrong. In fact, I think that LessWrong and I agree for the most part on that subject. We’re just using the word elitism differently.
Do you have even a single example of me continuing to think I am correct about something where a matter of truth (not wording) is concerned even after compelling evidence to the contrary is presented?
I think I would find you less frustrating if I stopped trying to interact with you in the first place. Please remind me that I said this if I ever try again.
Proposed solution: add lots of subdivisions with different requirements.
I had a couple of ideas like this myself and I chose to cull them before doing this poll for these reasons:
The problem with splitting the discussions is that then we’d end up with people having the same discussions in multiple different places. The different posts would not all have all the information, so you’d have to read several times as much if you wanted to get it all. That would reduce the efficiency of the LessWrong discussions to a point where most would probably find it maddening and unacceptable.
We could demand that users stick to a limited number of subjects within their subdivision, but then discussion would be so limited that user experience would not resemble participation in a subculture. Or, more likely, it just wouldn’t be enforced thoroughly enough to stop people from talking about what they want, and the dreaded plethora of duplicated discussions would still result.
The best alternative to this as far as I’m aware is to send the users who are disruptively bad at rational thinking skills to CFAR training.
That seems like an inefficient use of CFAR training (and so an inefficient use of whatever resources that would have to be used to pay CFAR for such training). I’d prefer to just cull those disruptively bad at rational thinking entirely. Some people just cannot be saved (in a way that gives an acceptable cost/benefit ratio). I’d prefer to save whatever attention or resources I was willing to allocate to people-improvement for those that already show clear signs of having thinking potential.
I am among those absolutely hardest to save, having an actual mental illness. Yet this place is the only thing saving me from utter oblivion and madness. Here is where I have met my only real friends ever. Here is the only thing that gives me any sense of meaning, reason to survive, or glimmer of hope. I care fanatically about it.
Many of the rules that have been proposed, or for that matter even the amount of degradation that has ALREADY occurred… if that had been the case a few years ago, I wouldn’t exist; this body would either be rotting in the ground, or literally occupied by an inhuman monster bent on the destruction of all living things.
I’m fascinated. (I’m a psychology enthusiast who refuses to get a psychology degree because I find many of the flaws with the psychology industry unacceptable). I am very interested in knowing how LessWrong has been saving you from utter oblivion and madness. Would you mind explaining it? Would it be alright with you if I ask you which mental illness?
Would you please also describe the degradation that has occurred at LW?
I’d rather not talk about it in detail, but it boils down to LW in general promoting sanity and connecting smart people. That extra sanity can be used to cancel out insanity, not just to create super-sanes.
Degradation: Lowered frequency of insightful and useful content, increased frequency of low quality content.
I have to admit I am not sure whether to be more persuaded by you or Armok. I suppose what it would come down to is a cost/benefit calculation that takes into account the destruction averted by saving the worst as well as the benefit produced by the best. Brilliant people can have quite an impact indeed, but they are rare, and it is easier to destroy than to create, so it is not readily apparent to me which group it would be more beneficial to focus on, or, if both, in what proportion.
Practically speaking, though, CFAR has stated that they have plans to make web apps to help with rationality training and training materials for high schoolers. It seems to me that they have an interest in targeting the mainstream, not just the best thinkers.
I’m glad that someone is doing this, but I also have to wonder if that will mean more forum referrals to LW from the mainstream...
Ctrl+C, Ctrl+V, problem solved.
If you’re suggesting that duplicated discussions can be solved with paste, then you are also suggesting that we not make separate areas.
Think about it.
I suppose you might be suggesting that we copy the OP and not the comments. Often the comments have more content than the OP, and often that content is useful, informative and relevant. So, in the comments we’d then have duplicated information that varied between the two OP copies.
So, we could copy the comments over to the other area… but then they’re not separate...
Not seeing how this is a solution. If you have some different clever way to apply Ctrl+C, Ctrl+V then please let me know.
No, because only the top content in each area would be shared to the others.
This creates a trivial inconvenience.
So add a “promote” button that basically does the same automatically.
I assign non-negligible probability to some cause that I am not specifically aware of (sorta, but not exactly, an outside context problem) having a negative impact on LW’s culture.