Briefly: If you want to use instrumental rationality to guide your actions, you need first to make explicit what you value. If we as a community want to use instrumental rationality to guide collective action, we need first to make explicit the values that we share. I think this is doable but not easy.
I like what I think is the motivating idea behind the original post: we want examples of people using the instrumental rationality they’ve learned here. If nothing else, this gives us more feedback on what seems to work and what doesn’t; what is easy to implement and what is difficult. Mostly, though, even a set of good heuristics for instrumental rationality seems like it ought to improve our lives.
Further, I understand the idea behind this challenge. It’s much easier for a hundred people to take a hundred separate actions than for those hundred people to collaborate on something large: small, separate actions can fail independently, and they don’t require the organizational overhead of one hundred coordinated commitments to a single organization. A light structure makes a lot of sense.
But this post is problematic. Our notion of instrumental rationality doesn’t supply the values we want to optimize; it only helps direct our actions to better optimize the values we make explicit. You might value kindness for its own sake, or ending the suffering of others, or improving the aesthetic experience of others, or other things. Human values differ, and making your own values explicit is difficult. Differing values should dictate differing actions. So the original post has a problem: it assumes an unnamed set of “shared values of this community.”
What’s more, I suspect that most readers here haven’t really tried to make their personal values that explicit. I could be totally wrong about this, of course, but I’ve spent a few spare moments today trying to list the various things I actually value, and I was surprised at how widely they varied and how difficult it was to wade through my own psyche for them.
Even so, I suspect that the first step in applying instrumental rationality, even in broad strokes, is to get at least a rough idea of what to optimize. I don’t propose analyzing everything down to its finest details, but I’m pretty certain that a rough outline is a necessary starting point, just so the solutions you consider point in useful directions. (I’m setting aside some time to do this self-inventory soon.)
Similarly, any collective action by the LessWrong community should start by thinking out the values that the community actually shares. There are a few reasonable guesses: I expect we all value rationality, science, and knowledge; we probably all value decreasing global suffering and increasing global happiness. But even these weak assertions are broad strokes, of little use in deciding between actions.
In particular, these assertions of our group values, if true, do little to control expectation. An explicit value is a belief; it should control our expectation about which outcomes would be the most satisfying, in some coherent sense. [1] We might be able to find such explicit values fairly quickly, by judging the emotional reaction we have to hypothetical outcomes. (We do seem to have pretty decent emotional self-emulation hardware.)
And if it turns out that, as a group, we’re all convinced that the best use of our time is to change some specific thing out in the world, and that we actually need our group’s learned rationality to do that thing, then we should do that. Otherwise, save it until you know the problem you would seek to solve.
[1]: A coherent sense which I’m eliding here. I think this is a sensible assertion, but I’ve been writing this comment now for an hour. [2]
[2]: I’m starting to suspect that I should make a top-level post of this, and devote the appropriate time to it.
I don’t know of a way to get superscripts with Markdown markup, but if you pull up your Windows Character Map (or your operating system’s equivalent), there should be superscript 1 and 2 characters to paste in.
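For example, the Unicode characters ¹ (U+00B9) and ² (U+00B2) can be pasted directly into a comment; if the renderer happens to allow inline HTML (I haven’t checked), `<sup>1</sup>` should also produce a superscript.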
Several users here have made objections along these lines, and I take the point. I assumed greater homogeneity of values than was warranted, and I didn’t make explicit the links between those values and the specific actions I recommended.
As for the top-level post—I’d love to see it.