What should be reified?


As I’ve said elsewhere, I think the sticking point of a lot of important questions is just this question: what should be reified? Another way to say this is: what are our ontological and axiological commitments? This is basically a cold take in many of my circles, but certainly not elsewhere, and I’d like to make this distinction as glaringly obvious as it is to me. Hopefully this post will become trite and annoying to more people :).

Mathematics has the fortunate property of using axioms which are essentially solipsistic. In some sense they simply don’t permit varying interpretations; alternative models belong to a different genus or theory or whatever. The further we get from math, and the larger and more complex the objects we want to study, the more destructive or lossy our abstractions are.

Models and frameworks exist for various purposes; they provide gripping points for interacting with the world. The abstractions we choose are where our existing intuitions, goals, etc. leak into the model. This isn’t necessarily about explicit models, either: the question sits underneath lots of intuitions, “embodied knowings”, the whole lot.

Reifications make things easier to think about because they reduce dimensionality, or at least salient dimensionality. Maybe a goofy way to express the title of this post would be: “what subspace can we project into which preserves the structures we’re concerned with?” Perhaps better: “does this reification reduce complexity appropriately?” And then, surely, we should also ask: “what structures exactly are we concerned with?”, which brings us back to the title of the post.
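To make the subspace metaphor concrete, here is a minimal sketch in Python; the data and the “structure of concern” are invented for illustration. Both projections below reduce dimensionality, but only one preserves the structure we actually care about, and nothing in the projection operation itself tells you which.

```python
import numpy as np

# Toy data (made up for illustration): two clusters that differ
# only along z; x and y carry no information about cluster identity.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=[0.0, 0.0, +5.0], scale=1.0, size=(50, 3))
cluster_b = rng.normal(loc=[0.0, 0.0, -5.0], scale=1.0, size=(50, 3))
points = np.vstack([cluster_a, cluster_b])
labels = np.array([0] * 50 + [1] * 50)

# Two ways to "reify" the data into fewer dimensions:
proj_xy = points[:, :2]  # drop z: fewer dimensions, structure destroyed
proj_z = points[:, 2:]   # keep only z: fewer dimensions, structure preserved

# A crude check: distance between cluster means in each projection.
for name, proj in [("the xy-plane", proj_xy), ("the z-axis", proj_z)]:
    gap = np.linalg.norm(
        proj[labels == 0].mean(axis=0) - proj[labels == 1].mean(axis=0)
    )
    print(f"projected onto {name}: gap between cluster means = {gap:.2f}")
```

Deciding which projection is “right” requires already knowing which structures matter, which is exactly the question the post is asking.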

The flip side of this is to ask: what should be dereified? It’s part and parcel of the same question. A lot of frameworks rely on dereifying large aspects of the domain, sometimes explicitly, and sometimes by simply leaving them out of the discussion.

Ok dude, thanks for your philosophy 101 post.

No seriously, though. I think these distinctions are largely absent from our discourse, and in my experience, people are insensitive enough to them (or else just blended enough with their reifications) that they can’t step out of them to investigate the abstractions they’re using. Perhaps part of the trouble here is that this process is relatively phenomenologically subtle. If someone is insensitive to the process by which they apprehend the world, then they take the output of that process as real, and are mostly unable to investigate it. Or maybe I just suck at broaching these topics.

Again, I’ll try to be clear: these questions are upstream of the rest of the decisions we make. The “blooming, buzzing confusion” has to be reduced to manipulable structures, on the basis of which we make the rest of our determinations. And this even applies to judgements of value, regardless of whether we use frameworks to make them, or which ones.

And then, ok, getting closer to the actual discourse: how can we even do explicit tradeoff calculations? A number of attempts have been made to construct reifications to use as a basis for such calculations. I’m not as familiar with the literature as I would like to be, but my impression is that these are basically lacking in any useful normative or descriptive power, at any level outside of narrowly defined abstract games. Nobody (?) is productively doing expected-value (EV) calculations, afaik, etc. etc.

(Okay, to be fair, some people are currently trying, so I suppose the above is already making judgements about the quality of their models and their choices. In this respect maybe I’m less confident. The main point I can certainly make here is that the choice and determination of reifications is upstream of these models.)
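To be concrete about what an EV calculation even is, here is a toy sketch in Python, with entirely made-up numbers. The arithmetic is trivial; all of the contestable work happens before it, in deciding what counts as an outcome, where the probabilities come from, and what got dereified to make the list this short.

```python
# A toy expected-value calculation. Every number here is invented;
# the point is that the contested judgements are not in the sum,
# but in the reifications that produced the list of outcomes.

# Hypothetical intervention, reified as (probability, value) pairs:
outcomes = [
    (0.70, 10.0),   # "modest benefit"
    (0.25, 0.0),    # "no effect"
    (0.05, -50.0),  # "serious harm"
]

expected_value = sum(p * v for p, v in outcomes)
print(f"EV = {expected_value:.2f}")  # EV = 4.50
```

None of which tells you whether “serious harm” was the right thing to reify, or whether value is even the kind of thing that adds.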

This isn’t even mostly about utilitarianism, either. I see this question as fundamental, but still latent, in debates all over the place: sex, justice, freedom, and correctness (of a few different kinds) come easily to mind.

But nonetheless we have to make and use lossy representations! No choice is still a choice; at some point various decisions get made, at personal, organizational, and civilizational levels, etc. The point here is not that reifications are bad, but to make this distinction salient so we can discuss the process more clearly.