Thanks for the response! I feel like I understand your position quite a lot better now, and can see more clearly where “pessimization” fits into my mental model. My version of your synthesis is something like the following:
“Activists often work in very adversarial domains, in the obvious and non-obvious ways. If they screw up even a little bit, this can make them much, much less effective, causing more harm to their cause than a similarly-sized screwup would in most non-adversarial domains. This process is important enough to need a special name, even though individual cases of it might be quite different. Once we’ve named it, we can see if there are any robust solutions to all or most of those cases.”
Based on this, I currently think of the concept of pessimization as making a prediction about the world: virtue ethics (or something like it) is a good solution to most or all of these problems, which would mean the problems themselves share something in common that is worthy of a label.
> It’s also worth noting that a major research goal of mine is to pin down mechanisms of pessimization more formally and precisely, and if I fail then that should count as a significant strike against the concept.
This is absolutely intriguing! Do you have anything more written about this publicly?
Ramble
At the risk of being too much of an “Everything Is Connected” guy, I think there’s a connection between the following (italicized items are things I’ve thought about or worked on recently; the password progression is sketched in code after this section):
storing passwords in plain text | encrypting passwords on a secure part of the disk | salting and hashing passwords
naive reasoning | studying fallacies and biases | learning to recognise a robust world model
utilitarianism | deontology | virtue ethics
It doesn’t quite fit, which is a little annoying. But the distinction between the level-one and level-two security mindsets comes to mind when I think about deontology vs virtue ethics.
Deontology seeks to find specific rules that constrain humans away from particular failure modes: “Don’t overthrow your democratically elected leader in a bloody revolution, even if the glorious leader really would be a good god-emperor,” and the like.
Perhaps a good version of virtue ethics would work like the true security mindset, although I don’t know whether the resulting version of virtue ethics would look much like what the Athenians were talking about.
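To ground the first row of that analogy, here’s a minimal sketch of the level-one vs level-three endpoints in Python. Everything below (the `hash_password` name, the parameter choices) is illustrative rather than taken from any particular codebase:

```python
import hashlib
import secrets

# Level one: store the password itself, so any read of the table leaks everything.
# Level two: encrypt the stored table, which is safe only while the key stays secret.
# Level three: salt and hash, so the server keeps nothing decryptable at all.

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext is never kept."""
    salt = secrets.token_bytes(16)  # fresh per user, so identical passwords get distinct digests
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```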
> My version of your synthesis is something like the following:
This is closer; I’d just add that I don’t think activism is too different from other high-stakes domains, and I discuss it mainly because people seem to take activists more at face value than other entities. For example, I expect that law firms often pessimize their stated values (of e.g. respect for the law) but this surprises people less. More generally, when you experience a lot of internal conflict, every domain is an adversarial domain (against parts of yourself).
> I think there’s a connection between the following:
> storing passwords in plain text | encrypting passwords on a secure part of the disk | salting and hashing passwords
> naive reasoning | studying fallacies and biases | learning to recognise a robust world model
> utilitarianism | deontology | virtue ethics
I think you lost the italics somewhere. Some comments on these analogies:
- The idea that some types of cognition are “fallacies” or “biases” and others aren’t does seem like a pretty deontological way of thinking about the world, insofar as it implicitly claims that you can reason well just by avoiding fallacies and biases.
- As the third step in this analogy, instead of “learning to recognize a robust world model”, I’d put “carrying out internal compromises”, i.e. figuring out how to reduce conflict between heuristics and naive reasoning and other internal subagents.
- Re the passwords analogy: yes, deontology and virtue ethics are adversarially robust in a way that utilitarianism isn’t. But also, virtue ethics is scalable in a way that deontology isn’t, which seems well-captured by the distinction between storing passwords on secure disks vs salting and hashing them.
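The robustness claim in that last point is concrete enough to sketch: verification recomputes the digest from the stored salt, so the original password never needs to be stored or recovered, and a leaked table gives an attacker nothing to replay. A minimal Python sketch (the `verify_password` helper is illustrative, paired with the hypothetical `hash_password` above):

```python
import hashlib
import hmac

def verify_password(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest from the candidate and compare it to the stored one."""
    recomputed = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 600_000)
    return hmac.compare_digest(recomputed, digest)  # constant-time comparison
```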