Yeah, I’m working towards that. Cheap examples: the hard sciences have made certain kinds of assumptions (reductionism, the primacy of formal models, etc.) which have been extremely generative. There are lots of locally very helpful reifications; various kinds of optimization criteria, for instance, are very useful for generating strategies. A big part of my point is that these should always be taken as provisional.
Note that when you say “reification”, my mind replaces it with “model”, “map”, or “focus”. If you mean something else, the source of my confusion is clear.
“Focus” is the best of these, but it isn’t great. In most cases I think reifications are upstream of specific models, or generate them, or something like that. Reification is the (implicit) process of choosing weights for tradeoff calculations, and more generally for salience or priority. An okay example: in economics we start by constructing measures on the basis of which to make determinations, and even before these get Goodharted we’ve already tried to collapse the complexity of the domain into a small number of factors to think about. We may even have destroyed a number of other important dimensions in that representation. This happens, in some sense, before the rest of the model is constructed.
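That collapse can be sketched as a toy calculation. Everything here (the factor names, the weights) is invented purely for illustration; the point is just that choosing the weights is itself the reification step, done before any downstream model exists:

```python
# Toy illustration: collapsing a multi-dimensional domain into one index.
# All factors and weights are hypothetical.

# The domain as we might first encounter it: many dimensions.
region = {
    "output": 0.8,
    "employment": 0.6,
    "ecological_health": 0.2,  # dimensions left out of the index...
    "social_trust": 0.9,       # ...simply vanish from the summary
}

# Choosing these weights is the reification: everything not listed
# effectively gets weight zero, before any model is built on top.
weights = {"output": 0.7, "employment": 0.3}

index = sum(weights[k] * region[k] for k in weights)
print(round(index, 2))  # prints 0.74
```

Nothing downstream of `index` can recover `ecological_health` or `social_trust`; the representation has already discarded them.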
One source of my confusion may be the use of “reified” in the passive voice, as something that happens to ideas without the actor being specified.
This may just be sloppiness on my part. I usually mean something like “an idea, as held by a person, which that person has reified.” Compare, e.g., “a loved one.”