It’s true that you usually have some additional causal levers, but none of them is exactly the same as being the kind of person who does X.
Not sure I understand. It seems like “being the kind of person who does X” is a habit you cultivate over time, which causally influences how people react to you. Seems pretty analogous to the job candidate case.
If CDT agents often modify themselves to become LDT/FDT agents, then it would broadly seem accurate to say that CDT is getting outcompeted.
See my replies to interstice’s comment—I don’t think “modifying themselves to become an LDT/FDT agent” is what’s going on; at least, there doesn’t seem to be pressure to modify themselves to do all the sorts of things LDT/FDT agents do. The two come apart in cases where the modification doesn’t causally influence another agent’s behavior.
(This seems analogous to claims that consequentialism is self-defeating because the “consequentialist” decision procedure leads to worse consequences on average. I don’t buy those claims, because consequentialism is a criterion of rightness, and there are clearly some cases where doing the non-consequentialist thing is a terrible idea by consequentialist lights even accounting for signaling value, etc. It seems misleading to call an agent a non-consequentialist if everything they do is ultimately optimizing for achieving good consequences ex ante, even if they adhere to some rules that have a deontological vibe and in a given situation may be ex post suboptimal.)
Attempting to cultivate a habit is not the same as directly being that kind of person. The distinction may seem slight, but it’s worth keeping track of.
Thanks!