I think I must be misunderstanding you. It’s not so much that I’m saying that our goals are the bedrock, as that there’s no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there’s some basis for what we “ought” to do, but I’m making exactly the same point you are when you say:
what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
I know of no such evidence. We do act in pursuit of goals, and that’s enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it’s not very close at all, and I agree, but I don’t see a path to getting closer.
So, to recap: we value what we value, and there’s no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about “ought” presume a goal both parties can agree on.
Would making paperclips become valuable if we created a paperclip maximiser?
To the paperclip maximizer, they would certainly be valuable—ultimately so. If you have some other standard, some objective measurement, of value, please show it to me. :)
By the way, you can’t say the wirehead doesn’t care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn’t care about goals would never do anything at all.
I think that you are right that we don’t disagree on the ‘basis of morality’ issue. My claim is only that which you said above: there is no objective bedrock for morality, and there’s no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
An entity that didn’t care about goals would never do anything at all.
I agree with the rest of your comment, and depending on how you define “goal”, with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call a purely heuristics-driven agent goal-oriented? (I have in mind simple commands along the lines of “go left when there is a light on the right”—think Braitenberg vehicles minus the evolutionary aspect.)
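To make the distinction concrete, here is a minimal sketch (in Python, purely as illustration; all names are my own, not from Braitenberg) of an agent that acts on a single hard-wired stimulus-response rule with no explicit goal representation:

```python
def step(position, light_side):
    """Apply the one hard-wired heuristic:
    'go left when there is a light on the right'.
    No goal is represented anywhere; the rule simply fires."""
    if light_side == "right":
        return position - 1   # move one unit left
    return position           # no stimulus, no action

# The agent "behaves" without ever holding a goal in mind:
pos = 0
for stimulus in ["right", "right", "none", "right"]:
    pos = step(pos, stimulus)
print(pos)  # -3
```

Whether such an agent counts as goal-oriented seems to be exactly the question: the behaviour looks light-avoiding from outside, but nothing inside the agent evaluates outcomes against a goal.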
Yes, I thought about that when writing the above, but I figured I’d fall back on the term “entity”. ;) An entity would be something that could have goals (sidestepping the hard work of deciding exactly which objects qualify).
See also
Hard to be original anymore. Which is a good sign!