I think the phrase “aware of and capable of making choices” hides most of the complexity I am interested in focusing on. What really is awareness? The word “aware” implies that it is a boolean thing, like “either some system is aware or it is not”, but I think that’s wrong. I think “awareness” varies in amount and kind.
And “making choices” is similarly complicated. The ball could stay put or roll, but it chooses to roll. You could say it never had the choice to do anything but roll because the mechanism which determined its choice to roll, its roundness, is so obvious and exposed, but suppose I understood the mechanisms of some human’s mind well enough to predict that human’s actions with the same accuracy? Would it be right to suggest that humans do not make choices since the choices were determined by the mechanisms by which humans choose?
It seems to me the Physical Stance and the Intentional Stance both describe the same systems. It is my feeling that in order to understand complex decision-making systems, such as humans, AIs, and sociotechnical systems, we need language that can describe them clearly. So I guess what I might be doing here is trying to force an exploration of the boundary between where the physical stance applies and where the intentional stance applies.
I could believe that a symbolic representation of other objects is the quality required to say that a system is aware, but then, is roundness symbolic? Where is the distinction between symbolic and mechanical?
Likewise, I could very much imagine that alternative outcomes are required for preference, but then whether some system has preferences depends on how well understood it is. That has uncomfortable implications. If an ASI understood humans sufficiently well, would that ASI be justified in claiming that humans do not have preferences? I’m much more comfortable admitting any system that affects outcomes has preferences than denying the preferences of any sufficiently well understood system.
… Oh, also, I didn’t put as much emphasis on it but I really am interested in the question of whether an agent’s preferences exist as an interplay between the world and itself. I feel that would have important implications for Agent Foundations and AI Alignment.
but suppose I understood the mechanisms of some human’s mind well enough to predict that human’s actions with the same accuracy? Would it be right to suggest that humans do not make choices since the choices were determined by the mechanisms by which humans choose?
I don’t think we need to suppose… I’d guess you probably do this frequently. Don’t you have family members, friends, and/or lovers of whom you have intimate knowledge and an extremely good track record of predicting their behavior?
If an ASI understood humans sufficiently well, would that ASI be justified in claiming that humans do not have preferences? I’m much more comfortable admitting any system that affects outcomes has preferences than denying the preferences of any sufficiently well understood system.
I don’t think it would be any more justified in claiming that humans don’t have preferences than I would be in claiming that anybody I know really well doesn’t have preferences. If you can predict which newspaper or soft drink your father buys from the store, that doesn’t mean he had no choice in the matter. If there are no other newspapers in stock, or only one brand of soft drink, then he has no choice. But, realistically, you can’t choose alternatives you’re not aware of.
A simple test of whether something is a choice is to ask: “if the agent believed something else or had very different desires, would the outcome be very different?” If, no matter what the agent desires or believes, the outcome would always be the same, then that’s not a choice.
Suppose someone goes up to the fridge at a store where there’s an orange drink and a strawberry drink, and you know they love orange flavor, so they buy the orange. That’s still a choice. Imagine instead that you knew they HATED orange, or that they loved strawberry: hypothetically they would then choose the strawberry. Therefore it was a choice.
Conversely, imagine a spectator high up on an embankment at a motor race. They are in a sea of people, a mere speck as seen from the track, so they have no earthly way of affecting the result of the race. There are twenty racers. It doesn’t matter which racer this single spectator desires or wishes to win: the result is hypothetically always the same. This is not a choice.
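If it helps, here’s a minimal sketch of that test in code (Python; the function names and scenarios are my own invention, just to make the counterfactual explicit): vary the agent’s desire across a range of hypotheticals, and see whether the outcome ever varies with it.

```python
# Toy sketch of the counterfactual test described above; the function
# and scenarios are invented for illustration, not a formal definition.

def is_choice(outcome_given_desire, possible_desires):
    """True if at least two different desires lead to different outcomes."""
    outcomes = {outcome_given_desire(desire) for desire in possible_desires}
    return len(outcomes) > 1

# The fridge: what the buyer takes depends on which flavour they love.
def drink_bought(favourite_flavour):
    stock = ["orange", "strawberry"]
    return favourite_flavour if favourite_flavour in stock else stock[0]

# The spectator: the race result ignores who they want to win.
def race_winner(desired_winner):
    return "racer 7"  # the same outcome whatever the spectator desires

print(is_choice(drink_bought, ["orange", "strawberry"]))             # True
print(is_choice(race_winner, [f"racer {n}" for n in range(1, 21)]))  # False
```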
I am not familiar with any credible model where a ball can “desire” to go up and, contingent on that alone, it does. This is why it is best represented by the “physical” stance in Dennett’s typology.
The word “aware” implies that it is a boolean thing, like “either some system is aware or it is not”, but I think that’s wrong. I think “awareness” varies in amount and kind.
Abstractly, I agree with this, and I think there’s a spectrum of awareness in ways that do influence choices. But I’m struggling for examples right now… the best that comes to mind is a couple deciding where to go to dinner. When one of them says “let’s have Italian”, knowing there is an Italian restaurant, they aren’t strictly aware of the menu, which could include Ragù, Calzone, Osso Buco, or dozens of other choices, but they are aware of at least one restaurant nearby, in their price range, that does “Italian”.
Likewise, preferences themselves often exist in parallel. If orange isn’t available, maybe they go for banana, or cherry. And choices are often driven by complex decision-making models operating on dozens of different dimensions or factors, even for something as simple as buying a shirt: is it comfortable? Do I like the pattern or the colour? Is the material breathable? What are the washing instructions? And so on.
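As a loose illustration of both points, parallel fallback preferences and many-dimensional scoring, here’s a toy sketch; every name and weight in it is invented for the example:

```python
# Toy sketch only; all attributes and weights here are invented.

# Parallel preferences: a ranked list with fallbacks.
preference_order = ["orange", "banana", "cherry"]
in_stock = ["banana", "cherry"]
chosen = next(flavour for flavour in preference_order if flavour in in_stock)

# Many-dimensional decisions: score a shirt along several factors at once.
weights = {"comfort": 0.4, "breathability": 0.3, "pattern": 0.2, "easy_care": 0.1}
shirt = {"comfort": 0.9, "breathability": 0.8, "pattern": 0.6, "easy_care": 0.5}
score = sum(weights[factor] * shirt[factor] for factor in weights)

print(chosen)           # banana: orange was preferred but unavailable
print(round(score, 2))  # 0.77: the shirt's overall weighted appeal
```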
A lot of this is black box analysis. I’m interested in white box analysis. I guess maybe “black box vs white box” means the same thing as “intentional stance vs physical stance”.
You speak of knowing the preferences of something, with the implication that you have observed the past behaviour of the system and can infer its future behaviour based on an abstract model of its “intentions” or “preferences”. Is this what is meant by the “intentional stance”? I think so, and it is indeed a valid way to examine the world.
But within a person, and within an AI model, there is some mechanism that causes those preferences to be so… and that is the kind of understanding I am focusing on: predicting the choice of orange flavour not based on past behaviour involving flavour choices or on hearing statements about preferences, but by examining the body, the brain, and the brain state with enough skill to see how and where the preference for orange is encoded, and predicting based on that. Is this the “physical stance”? In that case I think I might be interested in merging the physical and intentional stances.
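To make the contrast concrete, here is how I’d caricature the two kinds of prediction in code. The “brain state” dictionary is a pure stand-in, not a claim about how preferences are actually encoded:

```python
# Caricature of the two stances; the "brain state" is a stand-in only.

from collections import Counter

# Intentional stance (black box): infer the preference from past behaviour.
past_purchases = ["orange", "orange", "strawberry", "orange"]
black_box_prediction = Counter(past_purchases).most_common(1)[0][0]

# Physical stance (white box): read the preference out of the mechanism.
brain_state = {"flavour_weights": {"orange": 0.8, "strawberry": 0.3}}
flavour_weights = brain_state["flavour_weights"]
white_box_prediction = max(flavour_weights, key=flavour_weights.get)

# Both predict "orange"; they differ in what they inspect to get there.
print(black_box_prediction, white_box_prediction)
```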
For example, I might know that balls roll down hills not because I have analyzed them as physical objects, but because I have observed them roll down hills before. Is this not the same as the intentional stance? Modelling the preferences of the ball based on its past behaviour?
On the other hand, it isn’t too difficult to understand how roundness versus flatness affects rolling. The flat object stays where it is put and the round object rolls down the hill. You can see mechanically why this is the case, but you could just as well know it by inference, and I would suggest that most people learn about physical laws first by observing the behaviours of objects, and only later in life learn about things like friction and force and gravity.
I haven’t noticed anything you have said that categorically distinguishes the behaviour of an object rolling down a hill from the behaviour of a person expressing their preferences by choosing what they want.
Thanks for engaging : )