It seems like a good idea to explicitly discuss the presentation of the meme and how to effectively spread it (in particular, how many people should do it without risking damage). I also imagine an important ingredient is tools for quantitative analysis. For example, how easy is it to design links to LW or other resources that count click-throughs? How legally/socially questionable is it to compute aggregate statistics like return rates or retention time (for people with static IPs) for the users who clicked through?
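For concreteness, here is a minimal sketch of what counting click-throughs and computing return rates could look like, in Python with Flask and SQLite. Everything here is hypothetical (the /r/<campaign> endpoint, the clicks table, the particular definition of "return rate"); it is only meant to show that the tooling is cheap to build, not to describe anything LW actually runs.

```python
# A minimal sketch, assuming a hypothetical Flask redirect endpoint and a
# local SQLite store; none of these routes or tables exist on LW today.
import sqlite3
import time

from flask import Flask, redirect, request

app = Flask(__name__)
db = sqlite3.connect("clicks.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS clicks (campaign TEXT, ip TEXT, ts REAL)")


@app.route("/r/<campaign>")
def track_and_redirect(campaign):
    # Log the click (campaign tag, requester IP, timestamp), then forward
    # the visitor to the real destination.
    db.execute("INSERT INTO clicks VALUES (?, ?, ?)",
               (campaign, request.remote_addr, time.time()))
    db.commit()
    return redirect("https://lesswrong.com/", code=302)


def return_rate(campaign, window_days=30):
    # Fraction of IPs that clicked on more than one distinct day within the
    # window -- a crude proxy for "came back", and only meaningful for
    # visitors whose IPs are static.
    cutoff = time.time() - window_days * 86400
    rows = db.execute(
        "SELECT ip, COUNT(DISTINCT CAST(ts / 86400 AS INT)) "
        "FROM clicks WHERE campaign = ? AND ts > ? GROUP BY ip",
        (campaign, cutoff)).fetchall()
    if not rows:
        return 0.0
    return sum(1 for _, days in rows if days > 1) / len(rows)
```

Even something this crude would answer the "which phrasing of the link works" question; the legal/social question is separate, since IPs are identifying enough that you would want to aggregate the statistics and then discard them.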
What happens to the popular perception of causes which are frequently (hopefully indirectly) advanced on online communities such as Reddit? I can easily imagine rationality “seeds” doing as much harm as good on the internet, frankly.
LW is not a very good place to link to. The creation of more condensed resources to point someone to is probably more important than any of these considerations. In particular, you probably only get something like 30 seconds to convince someone that they should stay. You only get a little more after that to convince them that it is even possible that you have something new and interesting to say. You also need to have some hope of eventually convincing a skeptic, which it's really not clear LW can do (it provides a lot of ammunition against itself if a normal person has an internal argument about whether, say, reading the Sequences is a good idea). Less Wrong failed to convince me that I should care after several hours of visiting (I would certainly not be here if I didn’t interact in person with any other users), and I would describe myself as a pretty easy sell.
Well, we’ve had some threads to workshop ideas (1, 2). Results are variable—but that’s okay. The main thing I would suggest is to keep brainstorming ideas.
(e.g. perhaps a new rationality blog that isn’t LessWrong. Perhaps a hundred new rationality blogs that aren’t LessWrong. I keep telling ciphergoth to start blogging the things he says so eloquently and concisely in person … I posted today in my journal about a matter that’s arguably of rationalist concern in a world of woo—and it’s been getting great comments—so it doesn’t even require surmounting the barrier of bothering to set up a whole new blog. Make rationalist-interest posts to your own blogs!)
Less Wrong failed to convince me that I should care after several hours of visiting (I would certainly not be here if I didn’t interact in person with any other users), and I would describe myself as a pretty easy sell.
Since you are definitely the type of person I want LW to attract, I would be interested in anything you could remember about those first several hours of visiting.
In particular, I am interested in the effect of the talk here about AGI research being at once a potent threat to human life and civilization and an extremely effective form of philanthropy.
At the time I said “LW readers would be uniformly better served by applying their rationality than developing it further.”
The sentiment is still somewhat applicable; the difference between then and now is that then I believed that my own rationality had reached the point where further improvements were useless. LessWrong did nothing to convince me otherwise, when this should have been its first priority if it was trying to change my behavior.
(My first real posts to LessWrong were amusing games vaguely related to a naive conception of safe AGI.)
When you wrote that, did you mean applying it to the problems of society, to personal problems like wealth creation, or to both?
Both.