I think that developing optimized rationality seeds is a particularly promising route for the first step.
This should certainly be done, but it's a hard problem to attack. The environment in which the seed is encountered has a significant influence on how a reader will interact with it. If you like, you can view my problem as designing, and drawing an audience to, an environment in which existing persuasive techniques are adequate.
I think if you really believe that it is possible to design a virus in this way, you should work at it more aggressively. I am not too optimistic, largely because of the difficulty of experimentation and testing.
We live in a sea of viciously competitive memes, many created by people for a living—books, songs, advertising, culture in general.
Although science has learnt a huge amount about human cognitive biases in just the past few decades, it would be an error to assume that we knew nothing before then. We knew really quite a lot, enough to have an excellent working knowledge of how to approach the problem of creating a viable meme. And it turns out that if you want to create compelling memes, throwing lots of people at the problem, each of whom wants the meme with their name on it to succeed, works quite well.
Designing a really compelling meme is in the “get other people to do what you want” class of problems that we grew brains for. It’s something that plays to our natural aptitudes.
So just coming up with rationality seeds and throwing them out there into the memetic soup is a reasonable first approach, I think. Unless the seed is actually defective, it’s very unlikely to do actual damage.
It seems like a good idea to explicitly discuss the presentation of the meme and how to effectively spread it (in particular, how many people should do it without risking damage). I also imagine an important ingredient is tools for quantitative analysis. For example, how easy is it to design links to LW or other resources that count click-throughs? How legally/socially questionable is it to compute aggregate statistics like return rates or retention time (for people with static IPs) for the users who clicked through?
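As a minimal sketch of the kind of aggregate statistics described above, here is one way return rates and retention times could be computed from a click-through log. The event-log format (pairs of visitor identifier and Unix timestamp) and the one-hour "return" threshold are assumptions for illustration, not a reference to any existing tool:

```python
from collections import defaultdict


def engagement_stats(events, return_gap=3600):
    """Aggregate per-visitor stats from (visitor_id, timestamp) click events.

    A visitor counts as "returning" if any two of their clicks are separated
    by more than `return_gap` seconds; retention time is the span from their
    first click to their last.
    """
    by_visitor = defaultdict(list)
    for visitor_id, ts in events:
        by_visitor[visitor_id].append(ts)

    returned = 0
    retention = {}
    for visitor_id, times in by_visitor.items():
        times.sort()
        retention[visitor_id] = times[-1] - times[0]
        gaps = (later - earlier for earlier, later in zip(times, times[1:]))
        if any(gap > return_gap for gap in gaps):
            returned += 1

    return_rate = returned / len(by_visitor) if by_visitor else 0.0
    return return_rate, retention
```

For example, a visitor with clicks at t=0, t=100, and t=5000 would count as returning (the 4900-second gap exceeds the threshold) with a retention time of 5000 seconds. The legal and social questions about collecting such data, of course, are not addressed by the code.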
What happens to the popular perception of causes which are frequently (hopefully indirectly) advanced on online communities such as Reddit? I can easily imagine rationality “seeds” doing as much harm as good on the internet, frankly.
LW is not a very good place to link to. The creation of more condensed resources to point someone to is probably more important than any of these considerations. In particular, you probably only get something like 30 seconds to convince someone that they should stay. You only get a little more after that to convince them that it is even possible that you have something new and interesting to say. You also need to have some hope of eventually convincing a skeptic, which it's really not clear LW can do (it provides a lot of ammunition against itself if a normal person has an internal argument about whether, say, reading the Sequences is a good idea). Less Wrong failed to convince me that I should care after several hours of visiting (I would certainly not be here if I didn't interact in person with any other users), and I would describe myself as a pretty easy sell.
Well, we’ve had some threads to workshop ideas (1, 2). Results are variable—but that’s okay. The main thing I would suggest is to keep brainstorming ideas.
(e.g. perhaps a new rationality blog that isn’t LessWrong. Perhaps a hundred new rationality blogs that aren’t LessWrong. I keep telling ciphergoth to start blogging the things he says so eloquently and concisely in person … I posted today in my journal about a matter that’s arguably of rationalist concern in a world of woo—and it’s been getting great comments—so it doesn’t even require surmounting the barrier of bothering to set up a whole new blog. Make rationalist-interest posts to your own blogs!)
Less Wrong failed to convince me that I should care after several hours of visiting (I would certainly not be here if I didn’t interact in person with any other users), and I would describe myself as a pretty easy sell.
Since you are definitely the type of person I want LW to attract, I would be interested in anything you could remember about those first several hours of visiting.
In particular, I am interested in the effect of the talk here about AGI research being at once a potent threat to human life and civilization and an extremely effective form of philanthropy.
At the time I said “LW readers would be uniformly better served by applying their rationality than developing it further.”
The sentiment is still somewhat applicable; the difference between then and now is that then I believed my own rationality had reached the point where further improvements were useless. LessWrong did nothing to convince me otherwise, whereas this should have been its first priority if it was trying to change my behavior.
(My first real posts to LessWrong were amusing games vaguely related to a naive conception of safe AGI.)
I recognize the difficulty and environmental dependence, and I still think it’s the way to go.
In the comment below, you say that you spent several hours here not being convinced and that resources need to be more condensed so as to get people to stick around.
That's exactly the type of thing I'm talking about. Spend the first 30 seconds getting them to stay for a bit longer, and then spend that time sinking the hook deeper by making condensed arguments that they'll accept as at least plausible and will need to look into, or something like that. It's not an all-or-nothing thing, and I think there's room for a lot of improvement on the margin.
I am working on a very specific form of the problem very aggressively. Still mostly building skills/understanding that will allow me to tackle the problem though, so the effort is in the same direction as the general problem.
When you wrote that, did you mean applying it to the problems of society, to personal problems like wealth creation, or to both?
Both.