I’m not actually sure I parsed this properly, but here are some things it made me think of:
There’s a range of outcomes I’m hoping for with Q&A.
I do expect (and hope) for a lot of the value to come from a small number of qualitatively-different “research questions”. I agree that these require much more than an incentive shift. Few people will have the time or skills to address those questions.
But, perhaps upstream of “research questions”, I also hope for it to change the overall culture of LW. “Small scale” questions might not be huge projects to answer but they still shift LW’s vibe from “a place where smart people hang out” to “a place where smart people solve problems.” And at that scale, I do think nudges and incentives matter quite a bit. (And I think these will play at least some role in pushing people to eventually answer “hard questions”, although that’d probably only result in 1–4 extra such people over a 5-year timeframe.)
I’m not 100% sure what you mean by communication structure. But: I am hoping for Q&A to be a legitimately useful exobrain tool, where the way that it arranges questions and subquestions and answers actually helps you think (and helps you to communicate your thinking with others, and collaborate). Not sure if that’s what you meant.
(I do think that “being a good exobrain” is quite hard and not something LW currently does a good job at, so am less confident we’ll succeed at that)
I was mostly hoping for an explanation of why you think compensation and monetary incentives are among the first problems you are considering. A common startup failure mode (and a hallmark of would-be technocrats’ ineffectual bloviating) is spending a bunch of energy on mechanism and incentive design to handle massive scale, before even doing basic functionality experiments. I hope I’m wrong, and I’d like to know your thinking about why I am.
I may well be over-focused on that aspect of the discussion—feel free to tell me I’m wrong and you’re putting most of your thought into mechanisms for tracking, sharing, and breaking down problems into smaller pieces. Or feel free to tell me I’m wrong and incentives are the most important part.
Yeah, I think we’re actually thinking much more broadly than it came across. We’ve been thinking about this for 4 months along many dimensions. Ruby will be posting more internal docs soon that highlight different avenues of thinking. What’s left are things that we’re legitimately uncertain about.
I had previously posted a question about whether questions should be renamed “confusions”. It didn’t get much engagement, and I ultimately don’t think it’s the right approach, but I considered it potentially quite important at the time.
But it’s possible to find hidden problems within the problem, and uncovering them is itself quite challenging.

What if your intuitions come from computer science, machine learning, or game theory, and you can exploit them? If you’re working on something like the architecture of general intelligence, or the problem of problem-solving itself, how do you get started?

When I look at problems in a search-and-solve algorithm, my intuition sends the message that something is wrong. All of my feelings are about how work gets done, and how it usually goes wrong.
This is a very good post.