Rounding Someone Off

Epistemic status: Exploratory and not held with much confidence. This is a quick write-up of an idea I’ve picked up.

Differing Models of the Same Person

To the left we see Brown’s intentional interpretation of Ella; to the right, Jones’s interpretation. Since these are intentional interpretations, the pixels or data points represent… personal cognitive idiosyncrasies (e.g., “She thinks she should get her queen out early”)…

Notice that here the disagreements can be substantial—at least before the fact: when Brown and Jones make a series of predictive bets, they will not always make the same bet. They may often disagree on what, according to their chosen pattern, will happen next. To take a dramatic case, Brown may predict that Ella will decide to kill herself; Jones may disagree. This is not a trivial disagreement of prediction, and in principle this momentous difference may emerge in spite of the overall consonance of the two interpretations.

Suppose, then, that Brown and Jones make a series of predictions of Ella’s behavior, based on their rival interpretations. Consider the different categories that compose their track records. First, there are the occasions where they agree and are right. Both systems look good from the vantage point of these successes. Second, there are the occasions where they agree and are wrong. Both chalk it up to noise, take their budgeted loss and move on to the next case. But there will also be the occasions where they disagree, where their systems make different predictions, and in these cases sometimes (but not always) one will win and the other lose. (In the real world, predictions are not always from among binary alternatives, so in many cases they will disagree and both be wrong.) When one wins and the other loses, it will look to the myopic observer as if one “theory” has scored a serious point against the other, but when one recognizes the possibility that both may chalk up such victories, and that there may be no pattern in the victories which permits either one to improve his theory by making adjustments, one sees that local triumphs may be insufficient to provide any ground in reality for declaring one account a closer approximation of the truth.

Now, some might think this situation is always unstable; eventually one interpretation is bound to ramify better to new cases, or be deducible from some larger scheme covering other data, etc. That might be true in many cases, but… radical indeterminacy [may also be] a genuine and stable possibility.

This indeterminacy will be most striking in such cases as the imagined disagreement over Ella’s suicidal mindset. If Ella does kill herself, is Brown shown to have clearly had the better intentional interpretation? Not necessarily. When Jones chalks up his scheme’s failure in this instance to a bit of noise, this is no more ad hoc or unprincipled than the occasions when Brown was wrong about whether Ella would order the steak not the lobster, and chalked those misses up to noise. This is not at all to say that an interpretation can never be shown to be just wrong; there is plenty of leverage within the principles of intentional interpretation to refute particular hypotheses—for instance, by forcing their defense down the path of Pickwickian explosion (“You see, she didn’t believe the gun was loaded because she thought that those bullet-shaped things were chocolates wrapped in foil, which was just a fantasy that occurred to her because . . . .”). It is to say that there could be two interpretation schemes that were reliable and compact predictors over the long run, but that nevertheless disagreed on crucial cases.

It might seem that in a case as momentous as Ella’s intention to kill herself, a closer examination of the details just prior to the fatal moment (if not at an earlier stage) would have to provide additional support for Brown’s interpretation at the expense of Jones’s interpretation. After all, there would be at least a few seconds—or a few hundred milliseconds—during which Ella’s decision to pull the trigger got implemented, and during that brief period, at least, the evidence would swing sharply in favor of Brown’s interpretation. That is no doubt true, and it is perhaps true that had one gone into enough detail earlier, all this last-second detail could have been predicted—but to have gone into those details earlier would have been to drop down from the intentional stance to the [neurological frame]. From the intentional stance, these determining considerations would have been invisible to both Brown and Jones, who were both prepared to smear over such details as noise in the interests of more practical prediction. Both interpreters concede that they will make false predictions, and moreover, that when they make false predictions there are apt to be harbingers of misprediction in the moments during which the dénouement unfolds. Such a brief swing does not constitute refutation of the interpretation, any more than the upcoming misprediction of behavior does (1991, pp. 47-9).

--Daniel Dennett, “Real Patterns”
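
To make Dennett’s bookkeeping concrete, here is a toy simulation of the four categories in the track record. Every parameter is made up (the hit rate, the number of bets, the coin-flip ground truth); the only point is the shape of the outcome when two interpretations are equally accurate over the long run but err independently:

```python
import random

random.seed(0)

N = 10_000        # predictive bets on Ella's behavior
P_CORRECT = 0.80  # made-up long-run hit rate, the same for both interpreters

agree_right = agree_wrong = brown_wins = jones_wins = 0

for _ in range(N):
    truth = random.random() < 0.5  # what Ella actually does (binary case)
    # Each interpretation independently predicts correctly with P_CORRECT.
    brown = truth if random.random() < P_CORRECT else not truth
    jones = truth if random.random() < P_CORRECT else not truth
    if brown == jones:
        if brown == truth:
            agree_right += 1  # both systems look good
        else:
            agree_wrong += 1  # both take their budgeted loss
    elif brown == truth:
        brown_wins += 1       # a "local triumph" for Brown...
    else:
        jones_wins += 1       # ...and just as often one for Jones

print("agree & right:", agree_right)
print("agree & wrong:", agree_wrong)
print("disagree, Brown right:", brown_wins)
print("disagree, Jones right:", jones_wins)
```

With independent errors and equal hit rates, the contested cases split roughly evenly, and there is no pattern in the victories that would let either interpreter adjust his way to dominance.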

Consummately Political People

We all know—or at the very least know of—people who are incorrigibly ideological, in one direction or another. People like this are very hard to get through to! You entertain the implications of their model, but instead of playing with those implications in turn, they seem to sentiment-check whatever sentence you generate against their tribal affiliation, and respond with either affirmation or frostiness.

These people hail from the epistemological land of low coordination! In those dark plains of abused, weary collective epistemology, all communication packets are secretly attempts to dunk on your ingroup. So, you can’t just unzip any piece of communication sent your way—that would be (hopelessly) naïve. Instead, you presuppose bad faith among people with differing or unknown tribal affiliations, and only suppose good faith on the special occasions when you’ve carefully tribally vetted everyone you’re talking to.

But look: my epistemological milieu is a high-coordination collective-epistemology—a very different realm! In my culture, we mostly stick to saying object-level things and playing with the implications of our claims. We try hard not to wrap our statements in ingroup markers to dunk on the hated outgroup, because that will eventuate in cycles of retaliation and a world in which all statements are attacks along some tribal status-front, not information packets that can be used to build things together and reap the positive-sum benefits.

From this cultural point of view, it’s striking that I best model consummately political people by thinking about their tribal affiliation, and not their model of the territory.

Rounding Off

Say you can model a person in two ways. À la the Dennett article above, you can opt for a high-rez model of their psyche, the sort of thing you pick up by being close friends with someone, or opt for a low-rez stereotype of that person, maybe rounding them off to a tribal affiliation plus a fistful of personality traits, hobbyhorses, and sore spots.

The two models genuinely differ, as much as the friend you’ve come to know and love does from the first impression they can give off at parties.

Sometimes, the stereotype can be the winning modeling strategy! The stereotype is (by its very nature) almost certainly less accurate. But it’s so much cognitively cheaper! You need much less success to make the stereotype a worthwhile model of the territory than you do to justify the cognitive investment needed to get to know someone in detail.[1] Moreover, people occasionally do you the epistemic favor of deferring to their own stereotype of themselves. When someone is conflicted about “who they take themselves to be,” and ultimately lets their stereotype rather than their world-model set their bottom line, they’re subsidizing stereotypes of themselves.[2]
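
To put toy numbers on the trade-off in footnote 1 (every figure below is invented; only the shape of the comparison matters), here is a back-of-the-envelope sketch of when the cheap stereotype beats the expensive high-rez model:

```python
# Net epistemic value = correct predictions earned minus the up-front
# cognitive cost of building the model, in arbitrary common units.
def value(accuracy: float, cost_to_build: float, n_predictions: int) -> float:
    return accuracy * n_predictions - cost_to_build

stereotype = dict(accuracy=0.70, cost_to_build=1.0)    # lossy but nearly free
high_rez   = dict(accuracy=0.90, cost_to_build=200.0)  # faithful but expensive

for n in (10, 100, 1_000, 10_000):
    s = value(n_predictions=n, **stereotype)
    h = value(n_predictions=n, **high_rez)
    winner = "stereotype" if s > h else "high-rez"
    print(f"{n:>6} predictions: stereotype={s:8.1f}  high-rez={h:8.1f}  -> {winner}")
```

Under these invented numbers the crossover sits near a thousand predictions: the high-rez model repays its construction cost for a close friend you interact with daily, and never for the stranger you argue with once at a party.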

Argumentative Charity Considered Harmful

When your high-rez model of someone is nearly a worthwhile epistemological investment, it feels virtuous to lean towards wielding the high-rez model of them instead of the low-rez stereotype.[3] We call this “argumentative charity.”

If you’re in the business of accurately anticipating all and only the real occurrences that lie in your future, though, you might see the alpha in rounding off as much as you can get away with. On this strategy, you spend your complexity capital only on those objects in the outside world that stubbornly resist your stereotyping.
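
As a sketch of what that policy might look like if written down (the threshold and the bookkeeping here are hypothetical, not anyone’s actual algorithm): stereotype everyone by default, and promote a person to a high-rez model only once the stereotype has mispredicted them too often to write off as noise.

```python
from collections import defaultdict

ESCALATION_THRESHOLD = 5  # surprises tolerated before upgrading (arbitrary)

surprise_counts = defaultdict(int)  # person -> times the stereotype missed
high_rez_models = set()             # people who have earned a detailed model

def record_prediction(person: str, stereotype_was_right: bool) -> None:
    """Log one prediction; escalate if the stereotype keeps failing."""
    if person in high_rez_models:
        return  # already spending complexity capital here
    if not stereotype_was_right:
        surprise_counts[person] += 1
        if surprise_counts[person] >= ESCALATION_THRESHOLD:
            high_rez_models.add(person)  # this person resists rounding off
```

Most people stay rounded off indefinitely; the complexity capital flows only to the handful who keep generating surprises.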

  1. ^

    Note that this trade-off is presented from the purely epistemic point-of-view—aims in life that trade off against maximizing predictive accuracy divided by model complexity are all ignored here.

  2. ^

    They might also be subsidizing coordination in their tribe, another complication we’ll flat-footedly ignore here.

  3. ^

    Yet another omitted detail: in high-coordination communities, the virtue of argumentative charity may be a form of coordinating to allow for compelling arguments for a priori bad-faith-seeming claims.
