I am not totally sure that I disagree with you, but I would not say that agency is subjective and I’m going to argue against that here.
Clarifying “subjectivity.” I’m not sure I disagree, because of this sentence: “there’s a certain structure out in the world which people recognize as X, because recognizing it as X is convergently instrumental for a wide variety of goals.” I’m guessing that where you’re going with this is that the reason it’s so instrumentally convergent is because there is in fact something “out there” that deserves to be labeled as X, irrespective of the minds looking at it? Like, the fact that we all agree that oranges are things is because oranges basically are things, e.g., they contain the molecules we need for energy, have rinds, and so on, and these are facts about the territory; denying that would be bad for a wide variety of goals because you’d be missing out on something instrumentally useful for many goals, where, importantly, “usefulness” is at least in part a territory property, e.g., whether or not the orange contains molecules that we can metabolize. If this is what you mean, then we don’t disagree. But I also wouldn’t call an orange subjective, in the same way I wouldn’t call agency subjective. More on that later.
People modeling things differently does not necessarily imply subjectivity. It seems like your main point about agents being subjective is that “different people model different things as agents at different times.” This doesn’t seem sufficient to me. Like, people modeled heat as different things before we knew what it was, e.g., there was a time when people were arguing about whether it was a motion, a gas, or a liquid. But heat turned out to be “objective,” i.e., something which seems to exist irrespective of how we model it. Likewise, before Darwin there was some confusion over what different dog breeds were: many people considered them to be different “varieties,” which was basically just a word for “not different species, but still kind of different.” Darwin claimed, and I believe him, that people would give different answers about whether these were different species or merely different varieties based on context and their history (e.g., if a naturalist had never seen dogs, then they’d probably call them different species; if they had, they’d call them different varieties). As it turns out, there’s an underlying “objective” thing here, which is how much their genomes differ from each other (I think? Not an evolutionary biologist :p). In any case, it seems to me that it is often the case that before scientific concepts are totally sussed out there is disagreement over how to model the thing they are pointing at, but this doesn’t on its own imply that the thing is inherently subjective.
A potential crux. There is a further thing you might mean here, what Dennett calls “the indeterminacy of interpretation,” which is that there is just no fact of the matter about what is agentic. Like, people might have disagreed about what heat was for a while, but it turned out that heat is more-or-less objective. The concept “hot,” on the other hand, is more subjective: just a property of how the neurons in a particular body/mind are tuned. In other words, the answer to whether something is hot is basically just “mu”: there is no fact of the matter about it. I am guessing that you think agency is of the latter type; I think it is of the former, i.e., I think we just haven’t pinned down the concepts in agency well enough to all agree on them yet, but that there is something “actually there” which we are pointing at. This might be our crux?
Abstractions are not all subjective? I am generally pretty confused by the stance that all “high-level abstractions” are subjective (although I don’t know what you mean by “high-level”). I think (based on your citing Jaynes) that you are saying something like “abstractions are reflections of our own ignorance.” E.g., we talk about temperature as some abstract thing because we are uncertain about the particular microstate that underlies it. But it seems to me that if you take this stance then you have to call basically everything subjective, e.g., an orange sitting right in front of me is subjective because I am ignorant of its exact atomic makeup. This seems a little weird to me; like, oranges wouldn’t go away if we became fully certain about them? Likewise, I don’t think agency goes away if we become less ignorant of it.
Agents are more like “oranges” than they are like “hot.” To me, agents seem clearly in the “orange” category, rather than the “hot” category. Sure, we might currently call different things agents at different times, but to me it seems clear that there is something “real” there that exists aside from our perceptual machinery/interpretation layer. Like, the fact that agents consume order (negentropy) from their environment to spend on the order they care about is one such example of something “objective-ish” about agents, i.e., a real regularity happening in the territory, not just relative to our models of it.
Why do we disagree about what’s agentic, then? On my model, part of the reason that people vary on what they call agentic is that (I suspect) “agency” is not going to be a coherent concept in itself, but will rather break out into multiple concepts which all contribute to our sense of it, such that many things we currently consider to be edge cases can be explained by one or a few factors being missing (or diminished). Likewise, I expect that it is not entirely categorical: things can have more or less of it, and have more or less at different times (i.e., a particular human varies in their ‘agent-ness’ over time). Neither of these seems incongruent to me with the idea that it’s objective-ish; it’s just that we haven’t clarified what we mean by agency yet.
I’m guessing that where you’re going with this is that the reason it’s so instrumentally convergent is because there is in fact something “out there” that deserves to be labeled as X, irrespective of the minds looking at it? Like, the fact that we all agree that oranges are things is because oranges basically are things, e.g., they contain the molecules we need for energy, have rinds, and so on, and these are facts about the territory; denying that would be bad for a wide variety of goals because you’d be missing out on something instrumentally useful for many goals, where, importantly, “usefulness” is at least in part a territory property, e.g., whether or not the orange contains molecules that we can metabolize. If this is what you mean, then we don’t disagree.
Yup, exactly. And good explanations, this is a great comment all around.
In the final paragraph, I’m uncertain if you are thinking about “agency” being broken into components which make up the whole concept, or about the category being split into different classes of things, some of which may have intersecting examples (or both?). I suspect both would be helpful. Agency can be described in terms of components like measurement/sensing, calculation, modeling, planning, comparison to setpoints/goals, and taking actions. Probably not that exact set, but then examples of agent-like things could naturally be compared on each component, and should fall into different classes. Exploring the classes would, I suspect, inform the set of components and the general notion of “agency”.
I guess to get work on that done, it would be useful to have a list of prospective agent components and a set of examples of agent-shaped things, and then of course to describe each agent in terms of the components. Does what I’m describing sound useful? Do you know of any projects doing this kind of thing?
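To make that concrete, here’s a rough sketch of what such a comparison might look like in Python; the component set and the yes/no scores are made up for illustration, not a settled decomposition of agency:

```python
# A toy sketch of the comparison described above: candidate agent components
# scored for a few agent-shaped examples. Both the component names and the
# scores are illustrative assumptions, not a settled decomposition of agency.
from dataclasses import dataclass, fields

@dataclass
class AgentProfile:
    sensing: bool    # measures some aspect of its environment
    modeling: bool   # maintains an internal model of the world
    planning: bool   # searches over possible action sequences
    setpoint: bool   # compares measurements against a goal/setpoint
    acting: bool     # takes actions that feed back into the environment

examples = {
    "thermostat":   AgentProfile(True,  False, False, True,  True),
    "bacterium":    AgentProfile(True,  False, False, True,  True),
    "chess engine": AgentProfile(True,  True,  True,  True,  True),
    "rock":         AgentProfile(False, False, False, False, False),
}

# Print which components each example has, so similar profiles can be
# grouped into classes by eye (or clustered, given more examples).
for name, profile in examples.items():
    present = [f.name for f in fields(profile) if getattr(profile, f.name)]
    print(f"{name:12s} -> {', '.join(present) or 'none'}")
```

Comparing a bigger pile of examples this way would presumably show whether the profiles cluster into natural classes, which would in turn feed back into the component list.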
On the topic of map-territory correspondence (is there a more concise name for that?), I quite like your analogies. Running with them a bit, it seems like there are maybe four categories of map-territory correspondence:
Orange-like: It exists as a natural abstraction in the territory and so shows up on many maps.
Hot-like: It exists as a natural abstraction of a situation. A fire is hot in contrast to the surrounding cold woods. A sunny day is hot in contrast to the cold rainy days that came before it.
Heat-like: A natural abstraction of the natural abstraction of the situation; or alternatively, comparing the temperatures of three things rather than only two. It might be natural to jump straight to the abstraction of a continuum of things being hot or not relative to one another, but it also seems natural to instead not notice homeostasis, and only categorize the hot and cold things in the environment that push you out of homeostasis.
Indeterminate: There is no natural abstraction underneath this thing. People either won’t consistently converge to it, or if they do, it is because they are interacting with other people (so the location could easily shift, since the convergence is to other maps, not to territory), or because of some other mysterious force like happenstance or unexplained crab shape magic.
It feels like “heat-like” might be the only real category, in some similarity-clusters kind of way. But “things which use a measurement proxy to compare the state of reality against a setpoint and take different actions based on the difference between the measurement result and the setpoint” seems, when I think about it, like a specific enough thing that you could divide all parts of the universe into being either definitely in or definitely out of that category, which would make it a strong candidate for being a natural abstraction, and I suspect it’s not the only category like that.
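To gesture at how crisp that category could be, here’s a minimal sketch of a thing inside it: something that measures a proxy, compares it against a setpoint, and picks an action based on the difference. The numbers and action names are just illustrative.

```python
# Minimal sketch of the "measurement proxy vs. setpoint" category described
# above. The setpoint, deadband, and action names are illustrative only.
SETPOINT = 20.0   # desired value of the quantity being regulated
DEADBAND = 0.5    # tolerance within which no action is taken

def measure_proxy(world_state: dict) -> float:
    # A proxy measurement of reality, not the "true" state: e.g., one thermometer.
    return world_state["thermometer_reading"]

def choose_action(measurement: float) -> str:
    # Act on the difference between the measurement and the setpoint.
    error = measurement - SETPOINT
    if error > DEADBAND:
        return "cool"
    if error < -DEADBAND:
        return "heat"
    return "do nothing"

world = {"thermometer_reading": 23.1}
print(choose_action(measure_proxy(world)))  # prints "cool"
```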
I wouldn’t be surprised if there were indeterminate things in shared maps and in individual maps, but I would be very surprised if many examples in shared maps were due to happenstance, rather than to individually-happenstance indeterminate things converging during map-comparison processes. Also, weirdly, the territory containing map-making agents which all mark a particular part of their maps may very well be a natural abstraction; that is, the mark existing at a particular point on the maps might be a real thing, even though the corresponding spot in the territory is not. I’m thinking this is related to a Schelling point or Nash equilibrium, or maybe to human biases, although those seem to have more to do with brain hardware than with agent interactions. A better example might be the sound of words: arbitrary, except that they must match the words other people are using.
Unrelated epistemological game: I have a suspicion that for any example of a thing that objectively exists, I can generate an ontology in which it does not. For the example of an orange, I can imagine an ontology in which “seeing an orange”, “picking a fruit”, “carrying food”, and “eating an orange” all exist, but an orange itself, outside of those, does not. Then an orange doesn’t contain energy, since an orange doesn’t exist, but “having energy” depends on “eating an orange”, which depends on “carrying food”, and so on, all without the need to be able to think of an orange as an object. To describe an orange you would need to say [[the thing you are eating when you are][eating an orange]], and it would feel in-between concepts in the same way that, in our common ontology, “eating an orange” feels like the idea between “eating” and “orange”.
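To make the game concrete, here’s a toy rendering of that noun-free ontology as a dependency graph over verb-phrase concepts; the nodes and edges are made up for illustration, and “orange” never appears as a standalone node:

```python
# Toy rendering of the noun-free ontology above: every concept is a
# verb-phrase, edges mean "depends on", and there is no standalone "orange".
# All nodes and edges are made up for illustration.
ontology = {
    "having energy":    ["eating an orange"],
    "eating an orange": ["carrying food"],
    "carrying food":    ["picking a fruit"],
    "picking a fruit":  ["seeing an orange"],
    "seeing an orange": [],
}

def describe(concept: str, depth: int = 0) -> None:
    # Walk the dependency chain; the closest we can get to "an orange" is
    # pointing at the verb-phrases it participates in.
    print("  " * depth + concept)
    for dep in ontology[concept]:
        describe(dep, depth + 1)

describe("having energy")
```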
I’m not sure if this kind of ontology:
Doesn’t exist because separating verbs from nouns is a natural abstraction that any agent modeling any world would converge to.
Does exist in some culture with some language I’ve never heard of.
Does exist in some subset of the population in a similar way to how some people have aphantasia.
Could theoretically exist, but doesn’t by fluke.
Doesn’t exist because it is not internally consistent in some other way.
I suspect it’s the first, but it doesn’t seem inescapably true, and now I’m wondering if this is a worthwhile thought experiment, or the sort of thing I’m thinking because I’m too sleepy. Alas :-p