I suggest minting a new word for people whose behavior has the effects of malice, whether it’s intentional or not.
Why only malicious behavior? It seems like the relevant idea is more general: oftentimes we care about what outcomes a pattern of behavior looks optimized to achieve in the world, not about the person’s conscious subjective verbal narrative. (Separately from whether we think those outcomes are good or bad.)
Previously, I had suggested “algorithmic” intent, as contrasted to “conscious” intent. Claims about algorithmic intent correspond to predictions about how the behavior responds to interventions. Mistakes that don’t repeat themselves when corrected are probably “honest mistakes.” “Mistakes” that resist correction, that systematically steer the future in a way that benefits the actor, are probably algorithmically intentional.
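The intervention test above can be sketched as a toy simulation (a minimal sketch; the agents, error rates, and function names here are all illustrative assumptions, not anything from the discussion):

```python
import random

random.seed(0)

def honest_agent(corrected):
    """Makes genuinely random errors; correction fixes the underlying cause."""
    error_rate = 0.02 if corrected else 0.3
    return random.random() < error_rate

def motivated_agent(corrected):
    """'Errors' are optimized to benefit the agent; correction barely helps."""
    error_rate = 0.28 if corrected else 0.3
    return random.random() < error_rate

def error_rate_after_correction(agent, trials=10_000):
    """Measure how the behavior responds to the intervention of correcting it."""
    return sum(agent(corrected=True) for _ in range(trials)) / trials

# Before correction both agents look identical (~30% errors). The
# intervention is what distinguishes them: an honest mistake stops
# repeating once corrected; an algorithmically intentional "mistake"
# resists the correction.
print(error_rate_after_correction(honest_agent))     # ≈ 0.02
print(error_rate_after_correction(motivated_agent))  # ≈ 0.28
```

The point of the sketch is that the two agents are indistinguishable from a single observation; only the counterfactual (what happens under correction) separates honest mistakes from algorithmically intentional ones.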
“Mistakes” that resist correction, that systematically steer the future in a way that benefits the actor, are probably algorithmically intentional.
Is “benefits the actor” here load-bearing for you (as opposed to just predictably bad for others)? I can think of examples of behaviors that rarely benefit the actor but seem unlikely to be talked out of (e.g. temper tantrums at the workplace are rarely selfishly positive in professional Western contexts).
Sorry, not load-bearing; I think “steering the future” was the important part of that sentence.
Although in the case of tantrums, I think the game-theoretic logic is pretty clear: if I predictably make a fuss when I don’t get my way, then people who don’t want me to make a fuss are more likely to let me get my way (up to a point). The fact that tantrums don’t benefit the actor when they happen isn’t itself enough to show that the disposition to throw them isn’t successfully extorting concessions that make them happen less often. If it doesn’t work in the modern workplace, it probably worked in the environment of evolutionary adaptedness.
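The extortion logic can be made concrete with a toy payoff calculation (the specific payoff numbers and names are assumptions for the sketch, not anything claimed in the discussion):

```python
# Toy payoffs, all illustrative:
FUSS_COST_TO_BOSS = 4        # cost to the boss of enduring a tantrum
CONCESSION_COST_TO_BOSS = 2  # cost to the boss of just giving in
TANTRUM_COST_TO_THROWER = 3  # cost to the thrower of actually throwing one

def boss_best_response(thrower_credibly_tantrums):
    """The boss concedes iff enduring the fuss costs more than conceding."""
    if thrower_credibly_tantrums and FUSS_COST_TO_BOSS > CONCESSION_COST_TO_BOSS:
        return "concede"
    return "refuse"

# Any individual tantrum is a pure loss to the thrower (it costs 3 and
# only ever happens after a refusal), yet the *credible policy* of
# throwing one changes the boss's best response, so in equilibrium the
# tantrums rarely need to actually occur:
print(boss_best_response(thrower_credibly_tantrums=False))  # refuse
print(boss_best_response(thrower_credibly_tantrums=True))   # concede
```

This is the sense in which behavior that never pays off when executed can still be what the overall pattern is optimized to achieve: the threat does the work, not the execution.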
Also, sometimes tantrums work in the training distribution of childhood but don’t work in the deployment environment of professional work.