Thanks for engaging! There’s a lot here I agree with—in particular, the concept of pessimization does seem like a dangerous one which could be used to demoralize people. I also think psychoanalyzing me is fair game here, and that it would be a big strike against the concept if I were using it badly.
I’m trying to figure out if there’s some underlying crux here, and the part that gets closest to it is maybe:
I think Richard makes an important error when he complains about existing activist-ish groups: he compares these groups to an imaginary version of the activist group which doesn’t make any mistakes. Richard seems to see all mistakes made by activist groups as unforced and indicative of deep problems or malice.
I don’t know how you feel about the concept of Moloch, but I think you could probably have written a pretty similar essay about that concept. In each individual case you could characterize a coordination failure as just an “ordinary failure”, rather than a manifestation of the larger pattern that constitutes Moloch. And indeed your paragraph above is strikingly similar to my own critique of the concept of Moloch, which basically argues that Scott is comparing existing coordination failures to an imaginary world which has perfect coordination. I’ve also made similar critiques of Eliezer’s concept of “civilizational inadequacy” as measuring down from perfection.
I think that the synthesis here is that neither pessimization nor Moloch nor “civilizational inadequacy” should be treated as sufficiently load-bearing that they should tell you what to do directly. In some sense all of these create awayness motivations: don’t pessimize, don’t be inadequate, don’t let Moloch win. But as Malcolm Ocean points out, awayness motivations are very bad for steering. If your guiding principle is not to be inadequate, then you will probably not dream very big. If your guiding principle is not to pessimize, then people will probably just throw accusations of pessimization at each other until everything collapses into a big mess.
That’s why I ended the post by talking about virtue ethics, and how it can be construed as a technology for avoiding pessimization. I want to end up in a place where people almost never say to each other “stop pessimizing”; instead they say “be virtuous”. But in order to argue for virtues as the solution to pessimization, i.e. the way to build the “imaginary version” of groups which don’t make such unforced errors, I first need to point at one of the big problems they’re trying to solve. It’s also worth noting that a major research goal of mine is to pin down mechanisms of pessimization more formally and precisely, and if I fail then that should count as a significant strike against the concept.
I’m not 100% sure that this is the right synthesis, and will need to muse on it more, but I appreciate your push to get this clearer in my head (and on LessWrong).
Lastly, at the risk of turning this political, the one thing I’ll say about the “support Hamas” stuff is that there’s a spectrum of what counts as “support”: from literally signing up to fight for them, to cheering them on, to dogwhistling in support of them, to pushing for some of the same goals that they do, to merely failing to condemn them. My contention is that there are important ways in which Hamas’ lack of alignment with western values leads to more western support for them (e.g. the wave of pro-Palestine rallies immediately after they killed many civilians), which is what makes this an example of pessimization. Of course this is a dangerous kind of accusation, because there’s a lot of wiggle room in exactly what we mean by “lack of alignment”, and in the distinction between supporting Hamas itself and supporting associated causes. I personally still think the effect is stark enough that my core point was correct, but I should have phrased it more carefully. (Note: I edited this paragraph a few mins after writing it, because the original version wasn’t very thoughtful.)
Thanks for the response! I feel like I understand your position quite a lot better now, and have a much clearer sense of where “pessimization” fits into a mental model. My version of your synthesis is something like the following:
“Activists often work in very adversarial domains, in both obvious and non-obvious ways. If they screw up even a little bit, this can make them much, much less effective, causing more harm to their cause than a similarly-sized screwup would in most non-adversarial domains. This process is important enough to need a special name, even though individual cases of it might be quite different. Once we’ve named it, we can see if there are any robust solutions to all or most of those cases.”
Based on this, I currently think of the concept of pessimization as making a prediction about the world: virtue ethics (or something like it) is a good solution to most or all of these problems, which would mean the problems themselves share something in common that is worthy of a label.
It’s also worth noting that a major research goal of mine is to pin down mechanisms of pessimization more formally and precisely, and if I fail then that should count as a significant strike against the concept.
This is absolutely intriguing; do you have anything more written about this publicly?
Ramble
At the risk of being too much of an “Everything Is Connected” guy, I think there’s a connection between the following (italicized items are things I’ve thought about or worked on recently):
storing passwords in plain text|encrypting passwords on a secure part of the disk|salting and hashing passwords
naive reasoning|studying fallacies and biases|learning to recognise a robust world model
utilitarianism|deontology|virtue ethics
It doesn’t quite fit, which is a little annoying. But the level-one vs level-two security mindset distinction comes to mind when I think about deontology vs virtue ethics.
Deontology seeks to find specific rules that constrain humans away from particular failure modes: “Don’t overthrow your democratically elected leader in a bloody revolution even if the glorious leader really would be a good god-emperor”, and the like.
Perhaps a good version of virtue ethics would work like the true security mindset, although I don’t know whether the resulting version of virtue ethics would look much like what the Athenians were talking about.
My version of your synthesis is something like the following:
This is closer; I’d just add that I don’t think activism is too different from other high-stakes domains, and I discuss it mainly because people seem to take activists more at face value than other entities. For example, I expect that law firms often pessimize their stated values (of e.g. respect for the law) but this surprises people less. More generally, when you experience a lot of internal conflict, every domain is an adversarial domain (against parts of yourself).
I think there’s a connection between the following
storing passwords in plain text|encrypting passwords on a secure part of the disk|salting and hashing passwords
naive reasoning|studying fallacies and biases|learning to recognise a robust world model
utilitarianism|deontology|virtue ethics
I think you lost the italics somewhere. Some comments on these analogies:
The idea that some types of cognition are “fallacies” or “biases” and others aren’t does seem like a pretty deontological way of thinking about the world, insofar as it implicitly claims that you can reason well just by avoiding fallacies and biases.
As the third step in this analogy, instead of “learning to recognize a robust world model”, I’d put “carrying out internal compromises”, i.e. figuring out how to reduce conflict between heuristics and naive reasoning and other internal subagents.
Re the passwords analogy: yes, deontology and virtue ethics are adversarially robust in a way that utilitarianism isn’t. But also, virtue ethics is scalable in a way that deontology isn’t, which seems well-captured by the distinction between storing passwords on secure disks vs salting and hashing them.
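Since the password progression is doing real work in this analogy, here is a minimal sketch of what the third option (salting and hashing) looks like in practice, using only Python’s standard library. The function names and the iteration count are illustrative choices of mine, not anything from the thread:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; real systems tune this


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for a new password using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # fresh random salt, so identical passwords get different digests
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)


# The server stores only (salt, digest), never the password itself.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The relevant contrast with the “secure part of the disk” option is that there is no single key or location whose compromise exposes everything: each stored password is individually hardened, which is (I take it) the scalable-robustness property the analogy is pointing at.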