Rationalists should win. So what’s stopping us? We got a big upgrade to our epistemic rationality from the Sequences, but our instrumental rationality may still be lacking, both individually and especially as groups. (Are there any CFAR instructors or graduates paying attention to this thread?) How would the hypothetical ideal instrumental rationalist approach this problem? That last one is not rhetorical. Post answers below.
Remember why an oracle AI is a small step away from a genie: If we can “epistemically” predict what the ideal agent would do (or even approximate it well enough) then we can take that action ourselves. We still have the subproblems of akrasia and group coordination. We can solve them the same way: how would the ideal agent solve these problems?
I’ll try my hand at answering first below, but remember the wisdom of crowds. Some of you can probably improve on my prediction attempts.
The first step is probably deciding what exactly we want. Remember that values are orthogonal to intelligence. It’s not enough to imagine an ideal instrumental rationalist without also imagining what that rationalist wants.
What does revitalizing LessWrong mean? If we are wildly successful in our endeavor after one year, what does LessWrong look like at that time? Why is LessWrong valuable to you? What makes it worth saving?
Maybe we can do more of that, better.
Again, not rhetorical, I want to know what the rest of you think.
What I think:
When I was young and learned how to read, my knowledge grew quickly, mainly thanks to children's encyclopedias. But then it tapered off. There was a period where I read even more but didn't learn as quickly. This was due to the low quality of my available reading material. When I discovered Wikipedia my knowledge grew quickly again, and then tapered off again. There is a great deal of information on the web, but even more noise. Wikipedia is a rare bright spot. The Sequences are the densest source of insight I've found since.
I value the concentrated insights. I want more of that. LessWrong delivered more of that, for a time. Distilling knowledge from the deluge of data available at our fingertips is hard work. I'm willing to contribute to that effort, since I stand to gain so much more. That's what made Wikipedia work. (That's what made The Pirate Bay work.) LessWrong is the same.
I'm more willing to trust information I find on LessWrong: because the sanity waterline is higher; because if an ignorant actor posts bad information, there's a much higher chance the community will call them on it here than elsewhere; because we care about truth, not authority, not politics, not some corporation's shareholders' pocketbooks. Trust is a valuable thing. I don't want to give that up.
I value interaction with intelligent people who are willing to change their minds, and are able to change mine, for the better.
I value practical advice that I can use in real life.
I value the community.
There may be more things that haven’t occurred to me yet.
If we can achieve all of that through other sites (Arbital, CFAR, etc.), the best of LessWrong in all but name, that’s fine with me. I don’t value the name itself, but we must have one.