I’m happy to see a push for increased empiricism and scientific effort on LW. But… I wish there were more focus on the word “how,” and less focus on the word “we.”
Three articles come to mind: “To Lead You Must Stand Up,” “First, Try to Make it to the Mean,” and “Money: The Unit of Caring.” (Only the first part of the second article will be directly relevant, but the latter parts are indirectly relevant.)
That is:
First, there’s insufficient focus on what concrete steps you are taking to move the culture in that direction. (Writing blog posts exhorting action does not count for much. Do you think The Neglected Virtue of Scholarship would have shifted community behavior as much if lukeprog hadn’t followed it up with posts containing massive reference lists?) The reference to yourmorals.org is fine, but what made that site important was a particular feature, not its goal or its structure. If you’ve thought of a similar feature that someone (ideally you) could code up, great! I will send as much karma as I can toward the person who makes that happen. But this is even more general than a call for better/easier rationality tests and exercises, and thus even less likely to cause concrete action.
Second, it really does help to be a specialist and know the prior art in a subject. The central lesson of experimental psychology is probably “designing experiments that test what you want them to test is really, really hard.” If there’s a specialist out there researching this stuff, then I would be happy to take part in any experiments they post on LW, and I suspect that many others here would be as well. If CFAR moves from advocacy and education to research (on cognitive science, not education), I again expect that I’d be willing to participate and so would others.
Just as it is a mistake to push the boundaries of life extension before you’ve made it past the mean life expectancy, it is fundamentally mistaken to try to push the boundaries of science when you don’t know where those boundaries are. Knowing what experiments have already been done and what they actually show should be a major input into what you test. The Neglected Virtue of Scholarship calls out Eliezer on exactly that: “er, your n=1 theory of procrastination seems to disagree with n>1 research.” I remember being fascinated by all the variants of the Wason selection task described in Thinking and Deciding. I had previously only been familiar with the basic version, and the implications of the original together with its variations are far stronger than the implications of the original alone.
(Note that one of the strengths of LW might be that you gather a bunch of neurologically similar people, who can share with each other knowledge and experience not useful to the general population. I have the same experience of procrastination as Eliezer, and learning that someone else out there has that issue is valuable knowledge. Given general human neurodiversity, looking for things that help everyone is probably going to be less useful than narrowing your view.)
Third, why try to train citizen scientists when we could make better use of specialist scientists? Gary Drescher posted here, but hasn’t in over a year. What would make LW valuable enough that he would post here again? XiXiDu managed to attract the attention of some experts in AI. What would make LW valuable enough that they would post here?
I agree with training citizen scientists in the sense of training empiricists (who will then naturally apply science to their lives). I think that LW having a culture of supporting science, both with dollars and volunteerism, would be better than not. But I don’t see you addressing the engineering problems with moving from one culture to the other, instead of just signalling that you would prefer the other culture.
A little over a week ago, two other LWers and I started doing research on the possibilities of an online rationality class. The goal of the project is to have an official proposal, as well as a beta version, ready in a few months. Besides hopefully spreading friendly memes and generating publicity, we aim to figure out whether this can be used as a tool to make progress on the difficult problems of teaching and measuring rationality. The best way to figure that out is to try to use it that way as we iterate.
I name-dropped the proposal in the OP, but since we started so recently it felt odd to write an article about it first.
I kind of meant the point about making better use of specialist scientists under “attracting the right crowd,” but I should have made it explicit.
The reason I didn’t address the engineering problems of moving from one culture to the other is that I’m unsure how to do it, and I didn’t want to get people locked into my particular plan for changing things. I also hoped that “come up with stuff that needs testing!” would show me whether I was wrong about the community’s insufficient emphasis on empiricism.
I was rereading my comments (because of this post) and noticed an answer to my earlier question about what would make LW valuable enough for Gary Drescher to post here: apparently, links to drafts of novel, interesting math papers.
It would very much surprise me if the goal of creating a space where experts in AI consider it valuable to post were not in tension with the goal of doing large-scale outreach (à la HPMOR).
This suggests that if LW’s sponsors want to attract participation from AI experts who embrace the community’s norms enough to be valuable (e.g., the norm that the rational position for anyone researching AI is to concentrate first on reliable Friendliness, lest they inadvertently destroy the world), it might be practical to do so in a space separate from the one that attracts participation from the community at large (e.g., folks like me).
Agreed. The first step I’d expect would be something like inviting relevant experts to give a talk at CFAR. The talk could be taped, or someone at CFAR could turn it into a post for LW, but the immediate goal would be building a relationship with the expert, which is probably much easier when there is one visible and friendly person at CFAR whom they know.
Money?
If a guy with a webcomic can raise a million dollars to make a video game, maybe we can fund research by people who know how to do research well.
Huh, that’s not a bad idea, actually. Has there ever been an attempt by the SIAI or CFAR (I’m not sure which is responsible for what, to be honest) to Kickstart some project related to rationality or AI? Something with a clear, measurable goal, like “give us a million dollars and we’ll give you a Friendly Oracle-grade AI,” though less ambitious than that, obviously.
If you can convince 12,000 people to donate an average of $100 (that’s $1.2 million) to an efficient research organization, great! I am not optimistic about your prospects, though.