Ok, that second suggestion was not "let's call ourselves one of these three things (LW or SSC or EA)"; I suggested we drop 'rationalist' in general and split our community into (these and other) subcommunities. And I'm not sure I agree with you on some of the terminology either.
I would call myself an Effective Altruist even though I don't donate 10% (I'm studying ethics so I can work in EA later), because I've taken the Giving What We Can pledge and I'm active in my local EA community.
And EY's blog was never as coherent as people say it was. But let's be extremely charitable, cut away all his other interests in AI, economics, etc., and only talk about: 1) having accurate beliefs and 2) making good decisions. For one, this is so vague it's almost meaningless; and second, even that is not coherent, because those two things are in conflict. The first is the philosophy of realism and the second is pragmatism, two irreconcilable philosophies. I've always dropped realism in favor of pragmatism, and apparently that makes me a post-rationalist now? Do people realize that you can't always do both?
Commented on EA under sibling comment. Sorry, it wasn’t meant as a personal attack, although it probably seems so. Sorry again.
From my perspective, the narrative behind the Sequences was like this: "A superhuman artificial intelligence could easily kill us all, for reasons that have nothing to do with Terminator movies, but instead are like Goodhart's law on steroids. It would require extraordinary work to create an intelligence that has human-compatible values and doesn't screw things up by accident. Such work would require smart people who have unconfused thinking about human values and intelligence. Unfortunately, even highly intelligent people get easily confused about important things. Here is why people are naturally so confused, and here is how to look at those important things properly. (Here is some fictional evidence about doing rationality better.)"
1) having accurate beliefs and 2) making good decisions. For one, this is so vague it's almost meaningless; and second, even that is not coherent, because those two things are in conflict.
To me it seems that pragmatism without accurate beliefs is a bit like running across a minefield. You are so fast that you leave all the losers behind. Then something unexpected happens and you die. (Metaphorically speaking, unless you are Steve Jobs.) A certain fraction of people survive the minefield, and then books and movies are made celebrating their strategy, failing to mention the people who used the same strategy and died. To me it seems like an open question whether such a strategy is actually better on average. (Though maybe this is just my ignorance speaking.)
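To make the survivorship-bias point concrete, here is a toy simulation; all the probabilities and payoffs in it are completely made up for illustration, and nothing in my argument depends on the exact numbers:

```python
# Toy model of the minefield: a risky strategy can look great
# if you only count the survivors. All numbers are hypothetical.
import random

random.seed(0)
N = 100_000

def reckless():
    # 10% chance of crossing the minefield and winning big.
    return 100.0 if random.random() < 0.10 else 0.0

def careful():
    # 95% chance of a modest but reliable payoff.
    return 20.0 if random.random() < 0.95 else 0.0

reckless_runs = [reckless() for _ in range(N)]
careful_runs = [careful() for _ in range(N)]

# What the books and movies report: the average among reckless *survivors*.
survivors = [x for x in reckless_runs if x > 0]
print("reckless, survivors only:", sum(survivors) / len(survivors))  # ~100
# What matters when choosing a strategy: the average over everyone who tried.
print("reckless, everyone:", sum(reckless_runs) / N)                 # ~10
print("careful, everyone: ", sum(careful_runs) / N)                  # ~19
```

The celebrated survivors really do end up far ahead, yet the careful strategy is almost twice as good on average; both facts are only visible if you also count the people who didn't make it.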
In real life, many people who try to have accurate beliefs fail, often for predictable reasons. So maybe this whole project is indeed as doomed as you see it. But maybe there are other factors. For example, both "trying to have accurate beliefs" and "failing at life" could be statistical consequences of being on the autistic spectrum. In that case, if you already happen to be on the spectrum, you cannot get rid of the bad consequences by abandoning the desire to have accurate beliefs. Another possible angle is that "trying to have accurate beliefs" is most fruitful when you associate with people who have the same values. Most of human knowledge is a result of collaboration. In that case, creating a community of people who share these values is the right move.
I don't want to go too deep into "the true X has never been tried yet" territory, but to me LW-style rationality seems like a rather new project, which could possibly bear new fruit. (The predecessors in the same reference class are, I suppose, General Semantics and Randian Objectivism.) So maybe there is a way to succeed that doesn't involve self-deception. At least for myself, I don't see a better option. But this may be about my personality, so I don't want to generalize to other people. Actually, it seems like for most people, LW-style rationality is not an option.
I suppose my point is that the Less Wrong philosophy—the attempt to reconcile the search for truth with winning at life—is a meaningful project, although maybe only for some kinds of people (not meant as a value judgment, but different personality types exist and different strategies work for them).
Commented on EA under sibling comment. Sorry, it wasn’t meant as a personal attack, although it probably seems so. Sorry again.
It didn't, because you couldn't attack me personally even if you wanted to. You don't know me personally, so why would I assume you were attacking me personally? I was merely trying to state a terminological disagreement, in an attempt to change the reader's hidden inference.
To me it seems that pragmatism without accurate beliefs is a bit like running across a minefield.
This is not what philosophical pragmatism is about. With pragmatism you learn what is useful, which in 99.999% of cases will be the thing that's accurate. Note that I said:
Do people realize that you can't *always* do both? [emphasis added]
But philosophy is all about the edge cases. What do you do when there is knowledge that is dangerous for humanity's survival? Do you learn things that are probably memetic hazards? Realism says 'yes'; pragmatism says 'no'. Pragmatism is about 'winning'; realism is about 'truth'. If somehow you can show that these clearly opposed philosophies are actually reconcilable, you will win all the philosophy awards. Until that time, I choose winning.
OK, thanks for the explanation. The part about avoiding memetic hazards… seems like a valuable thing to do, but it also seems to me that in practice most attempts to avoid memetic hazards have second-order effects. (Obvious counter-argument: if there are successful cases of avoiding memetic hazards that do not have side effects, I would probably not know about them. An important part of keeping a secret is never mentioning that there is a secret.)
But this would be a debate for another day. Maybe even an entire field of research: how to communicate infohazards. (If you found one, there is a chance other people will, too. How can you decrease that probability without doing things that will likely blow back later?)
In the meantime, if in most cases the accurate thing is the useful thing, and if we don't know how to handle the remaining cases, I feel okay going for the accurate thing. (This is probably easier for me, because I personally don't do anything important on a large scale, so I don't have to worry about accidentally destroying humanity.)