Can you give an example of some decision or model that does it well? I don’t disagree with the framework of recognizing that all of this is human-level modeling, and not actually encoded in the real world, but I don’t know that I get anything out of questioning or changing most of the “default” models of human experience.
Note that when you say “reification”, my mind replaces it with “model”, “map”, or “focus”. If you mean something else, the source of my confusion is clear.
Edit: one source of my confusion may be the use of “reified” as a passive verb, something that happens to ideas without a specified actor. I sometimes trip up on “model” in the same way, wanting to clarify whether it’s an agent modeling some prediction, or a model that exists in a vacuum which could be used to make a prediction. Relatedly, I trip on the use of “we” without acknowledging the variance among humans, or whether it’s a recommendation to others or an observation of yourself.
Yeah, I’m working towards that. Cheap examples would be: the hard sciences have made certain kinds of assumptions (reductionism, primacy of formal models, etc.) which have been extremely generative. There are lots of locally extremely helpful reifications; various kinds of optimization criteria are certainly very useful for generating strategies. A big part of my point is that these should sort of always be taken as provisional.
Note that when you say “reification”, my mind replaces it with “model”, “map”, or “focus”. If you mean something else, the source of my confusion is clear.
“Focus” is the best among these but isn’t great. In most cases I think a lot of reifications are upstream of specific models, or generate them. Reification is the (implicit) process of choosing weights for tradeoff calculations, and more generally for salience or priority. Maybe an OK example: in economics we start trying to construct measures on the basis of which to make determinations, and even before these get Goodharted we’ve already tried to collapse the complexity of the domain into a small number of factors to think about. We might even have destroyed a number of other important dimensions in this representation. This happens, in some sense, before the rest of the model is constructed.
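To make that collapse concrete, here’s a toy sketch (the dimensions and weights are hypothetical, purely for illustration): once weights are chosen, states that differ only on unweighted dimensions become indistinguishable to the measure.

```python
# Toy illustration (hypothetical dimensions/weights): reification as
# collapsing a rich state into a few weighted factors, before any
# further modeling happens.

# Two "rich" descriptions of an economy-like domain.
state_a = {"gdp": 1.0, "employment": 0.9, "leisure": 0.2, "trust": 0.8}
state_b = {"gdp": 1.0, "employment": 0.9, "leisure": 0.9, "trust": 0.1}

# The reification step: choose which factors count, and how much.
# Every dimension not given a weight is silently discarded.
weights = {"gdp": 0.7, "employment": 0.3}

def collapse(state, weights):
    """Project a rich state down to a single score."""
    return sum(weights[k] * state[k] for k in weights)

# The two states differ a lot on leisure and trust, but the collapsed
# representation can no longer see that difference at all.
print(collapse(state_a, weights))  # 0.97
print(collapse(state_b, weights))  # 0.97
```

Any model built downstream of collapse() inherits that blindness, which is the sense in which the reification happens before the rest of the model is constructed.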
one source of my confusion may be the use of “reified” as a passive verb, which happens to ideas without specifying the actor.
This may just be sloppiness on my part. I usually mean something like “an idea, as held by a person, which that person has reified.” Compare to e.g. “a loved one” or something like that.