The purpose of a Roomba is to clean rooms. Clean rooms are what it behaves as though it “values”—whereas its “beliefs” would refer to things like whether it has just banged into a wall.
There seems to be little problem in modelling the Roomba as an expected utility maximiser—though it is a rather trivial one.
That is only true if understood to mean the purpose which the user of a Roomba is using it to achieve, or the purpose of its designers in designing it. It is not necessarily the Roomba’s own purpose, the thing the Roomba itself is trying to achieve. To determine the Roomba’s own purposes, one must examine its internal functioning and discover what those purposes are; or, alternatively, conduct the Test For The Controlled Variable. This is straightforward and unmysterious.
I have a Roomba. My Roomba can tell if some part of the floor is unusually dirty (by an optical sensor in the dust intake, I believe), and give that area special attention until it is no longer filthy. Thus, it has a purpose of eliminating heavy dirt. However, beyond that it has no perception of whether the room is clean. It does not stop when the room is clean, but when it runs out of power or I turn it off. Since it has no perception of a clean room, it can have no intention of achieving a clean room. I have that intention when I use it. Its designers have the intention that I can use the Roomba to achieve my intention. But the Roomba does not have that intention.
A Roomba with a more sensitive detector of dust pickup (and current models might have such a sensor—mine is quite old) could indeed continue operation until the whole room was clean. The Roomba’s physical sensors sense only a few properties of its immediate environment, but it would be able to synthesize from those a perception of the whole room being clean, in terms of time since last detection of dust pickup, and its algorithm for ensuring complete coverage of the accessible floor space. Such a Roomba would have cleaning the whole room as its purpose. My more primitive model does not.
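The synthesised “whole room clean” perception described above can be sketched in code. This is purely a hypothetical illustration: the signal names and the threshold are invented for the sketch, not taken from any actual Roomba firmware.

```python
# Hypothetical sketch of the termination rule described above: treat the
# room as "clean" once every accessible cell has been covered and no dust
# has been detected for some interval. Names and thresholds are invented
# for illustration.

CLEAN_INTERVAL = 30  # seconds with no dust pickup before calling it clean


def room_is_clean(seconds_since_last_dust, uncovered_cells):
    """Synthesise a perception of 'whole room clean' from two simpler
    signals: time since the intake sensor last saw dust, and whether the
    coverage algorithm still has floor left to visit."""
    return uncovered_cells == 0 and seconds_since_last_dust >= CLEAN_INTERVAL


def step(seconds_since_last_dust, uncovered_cells):
    """One decision step: stop when the synthesised perception says
    clean, spot-clean while dust is being picked up, otherwise keep
    covering the floor."""
    if room_is_clean(seconds_since_last_dust, uncovered_cells):
        return "stop"
    if seconds_since_last_dust == 0:
        return "spot_clean"
    return "cover"
```

The point of the sketch is that such a machine would stop of its own accord, which is exactly the perception the older model lacks.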
There seems to be little problem in modelling the Roomba as an expected utility maximiser—though it is a rather trivial one.
Little or large, you can’t do it by handwaving like that. A model of a Roomba as a utility maximiser would (1) state the utility function, and (2) demonstrate how the physical constitution of the Roomba causes it to perform actions which, from among those available to it, do in fact maximise that function. But I suspect you have done neither.
You seem engaged in pointless hair-splitting. The Roomba’s designers wanted it to clean floors. It does clean floors. That is what it is for. That is its aim, its goal.
It has sensors enough to allow it to attain that goal. It can’t tell if a whole room is clean—but I never claimed it could do that. You don’t need to have such sensors to be effective at cleaning rooms.
As for me having to exhibit a whole model of a Roomba to illustrate that such a model could be built—that is crazy talk. You might as well argue that I have to exhibit a model of a suspension bridge to illustrate that such a model could be built.
The utility maximiser framework can model the actions of any computable intelligent agent—including a Roomba. That is, so long as the utility function may be expressed in a Turing-complete language.
You seem engaged in pointless hair-splitting. The Roomba’s designers wanted it to clean floors. It does clean floors. That is what it is for. That is its aim, its goal.
To me, the distinction between a purposive machine’s own purposes and the purposes of its designers and users is essential to be clear about. It is very like the distinction between fitness-maximising and adaptation-executing.
As for me having to exhibit a whole model of a Roomba to illustrate that such a model could be built—that is crazy talk. You might as well argue that I have to exhibit a model of a suspension bridge to illustrate that such a model could be built.
As a matter of fact, you would have to do just that (or build an actual one), had suspension bridges not already been built and their principles of operation not already been well known, allowing us to stand on the shoulders of those who first worked out the design. That is, you would have to show that the scheme of suspending the deck by hangers from cables strung between towers would actually do the job. Typically, one uses one of these when it comes to the point of working out an actual design and predicting how it will respond to stresses.
If you’re not actually going to build it then a back-of-the-envelope calculation may be enough to prove the concept. But there must be a technical explanation or it’s just armchair verbalising.
The utility maximiser framework can model the actions of any computable intelligent agent—including a Roomba. That is, so long as the utility function may be expressed in a Turing-complete language.
If this is a summary of something well-known, please point me to a web link. I am familiar with stuff like this and see no basis there for this sweeping claim. The word “intelligent” in the above also needs clarifying.
What is a Roomba’s utility function? Or if a Roomba is too complicated, what is a room thermostat’s utility function? Or is that an unintelligent agent and therefore outside the scope of your claim?
By all means distinguish between a machine’s purpose, and that which its makers intended for it.
Those ideas are linked, though. Designers want to give the intended purpose of intelligent machines to the machines themselves—so that they do what they were intended to.
As I put it on http://timtyler.org/expected_utility_maximisers/ :

“If the utility function is expressed in a Turing-complete language, the framework represents a remarkably general model of intelligent agents—one which is capable of representing any pattern of behavioural responses that can itself be represented computationally.”
If expectations are not enforced, this can be seen by considering the I/O streams of an agent—and considering the utility function to be a function that computes the agent’s motor outputs, given its state and sensory inputs. The possible motor outputs are ranked, assigned utilities—and then the action with the highest value is taken.
That handles any computable relationship between inputs and outputs—and it’s what I mean when I say that you can model a Roomba as a utility maximiser.
The framework handles thermostats too. The utility function produces its motor outputs in response to its sensory inputs. With, say, a bimetallic strip, the function is fairly simple, since the output (deflection) is proportional to the input (temperature).
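The construction described in the preceding paragraphs can be made concrete with a short sketch. Everything here is illustrative: the function names are mine, and the toy policy stands in for whatever computable input-output relationship the agent embodies. Any such policy can be wrapped so that taking the argmax of the induced utility function reproduces the policy exactly.

```python
def as_utility_maximiser(policy, actions):
    """Wrap an arbitrary computable policy as a utility maximiser.

    The induced utility function scores the action the policy would take
    as 1 and every other action as 0, so choosing the highest-utility
    action recovers the policy's behaviour exactly."""

    def utility(state, inputs, action):
        return 1 if action == policy(state, inputs) else 0

    def act(state, inputs):
        # Rank every available motor output by utility; take the best.
        return max(actions, key=lambda a: utility(state, inputs, a))

    return act


# Toy Roomba-ish policy: spot-clean when dust is sensed, otherwise roam.
def roomba_policy(state, dust_sensed):
    return "spot_clean" if dust_sensed else "roam"


agent = as_utility_maximiser(roomba_policy, ["roam", "spot_clean"])
```

This is the sense in which the framework handles any computable relationship between inputs and outputs, though note that the wrapper is defined in terms of the policy it imitates.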
If expectations are not enforced, this can be seen by considering the I/O streams of an agent—and considering the utility function to be a function that computes the agent’s motor outputs, given its state and sensory inputs. The possible motor outputs are ranked, assigned utilities—and then the action with the highest value is taken.
That handles any computable relationship between inputs and outputs—and it’s what I mean when I say that you can model a Roomba as a utility maximiser.
The framework handles thermostats too.
I really don’t see how, Roombas or thermostats, so let’s take the thermostat as it’s simpler.
The utility function produces its motor outputs in response to its sensory inputs. With, say, a bimetallic strip, the function is fairly simple, since the output (deflection) is proportional to the input (temperature).
What, precisely, is that utility function?
You can tautologically describe any actor as maximising utility, just by defining the utility of whatever action it takes as 1 and the utility of everything else as zero. I don’t see any less trivial ascription of a utility function to a thermostat. The thermostat simply turns the heating on and off (or up and down continuously) according to the temperature it senses. How do you read the computation of a utility function, and a decision between alternatives of differing utility, into that apparatus?
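The tautological construction just described can be written out explicitly for a thermostat. This is a sketch with an invented setpoint, not any particular device; the “utility function” is read off from what the apparatus already does, which is the sense in which it adds nothing predictive.

```python
SETPOINT = 20.0  # degrees C, invented for illustration


def thermostat(temperature):
    """The apparatus itself: heat when below the setpoint."""
    return "heat_on" if temperature < SETPOINT else "heat_off"


def utility(temperature, action):
    """The tautological utility function: 1 for the action the thermostat
    takes, 0 for the other. It is defined in terms of the behaviour, not
    computed by the device, so it predicts nothing the behaviour does not
    already tell us."""
    return 1 if action == thermostat(temperature) else 0


def maximise(temperature):
    """An 'agent' that picks the highest-utility action. By construction
    it agrees with the thermostat at every temperature."""
    return max(["heat_on", "heat_off"],
               key=lambda a: utility(temperature, a))
```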
The Pythagorean theorem is “tautological” too—but that doesn’t mean it is not useful.
Decomposing an agent into its utility function and its beliefs tells you which part of the agent is fixed, and which part is subject to environmental influences. It lets you know which region the agent wants to steer the future towards.
There’s a good reason why humans are interested in people’s motivations—they are genuinely useful for understanding another system’s behaviour. The same idea illustrates why knowing a system’s utility function is interesting.
There’s a good reason why humans are interested in people’s motivations—they are genuinely useful for understanding another system’s behaviour. The same idea illustrates why knowing a system’s utility function is interesting.
That doesn’t follow. The reason why we find it useful to know people’s motivations is because they are capable of a very wide range of behavior. With such a wide range of behavior, we need a way to quickly narrow down the set of things we will expect them to do. Knowing that they’re motivated to achieve result R, we can then look at just the set of actions or events that are capable of bringing about R.
Given the huge set of things humans can do, this is a huge reduction in the search space.
OTOH, if I want to predict the behavior of a thermostat, it does not help to know the utility function you have imputed to it, because this would not significantly reduce the search space compared to knowing its few pre-programmed actions. It can only do a few things in the first place, so I don’t need to think in terms of “what are all the ways it can achieve R?”—the thermostat’s form already tells me that.
Nevertheless, despite my criticism of this parallel, I think you have shed some light on when it is useful to describe a system in terms of a utility function, at least for me.
The Pythagorean theorem is “tautological” too—but that doesn’t mean it is not useful.
What’s that, weak Bayesian evidence that tautological, epiphenomenal utility functions are useful?
Decomposing an agent into its utility function and its beliefs tells you which part of the agent is fixed, and which part is subject to environmental influences.
Supposing for the sake of argument that there even is any such thing as a utility function, both it and beliefs are subject to environmental influences. No part of any biological agent is fixed. As for man-made ones, they are constituted however they were designed, which may or may not include utility functions and beliefs. Show me this decomposition for a thermostat, which you keep on claiming has a utility function, but which you have still not exhibited.
What you do changes who you are. Is your utility function the same as it was ten years ago? Twenty? Thirty? Yesterday? Before you were born?
Thanks for your questions. However, this discussion seems to have grown too tedious and boring to continue—bye.
Well, quite. Starting from here the conversation went:
“They exist.”
“Show me.”
“They exist.”
“Show me.”
“They exist.”
“Show me.”
“Kthxbye.”
It would have been more interesting if you had shown the utility functions that you claim these simple systems embody. At the moment they look like invisible dragons.