I am responding to Phil’s confusion about utility functions and values, to dissolve the wrong question that the post is trying to answer.
Why are you talking about idealized von Neumann-Morgenstern agents (that have utility functions)?
It is useful to understand ideally rational agents when figuring out how you can be more rational. The incompatibility between the concept of an ideally rational agent’s utility function and Phil’s concept of value systems indicates problems in Phil’s concept.
I am responding to Phil’s confusion about utility functions and values …
Do you hold that it is always a confusion to talk about what is rather than about what should be?
It is useful to …
I meant ‘why did you talk about it in that exact context’, not ‘why do you ever talk about it’.
The incompatibility between the concept of an ideally rational agent’s utility function and Phil’s concept of value systems indicates problems in Phil’s concept.
I don’t see that. After all, it is neither impossible nor hard to describe a von Neumann-Morgenstern agent in Phil’s system. They are a subset of the agents that he wrote about. Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?
Do you hold that it is always a confusion to talk about what is rather than about what should be?
The confusion is thinking that maximizing utility includes choosing a utility function that is easy to maximize. If you really have something to protect, you want your utility function to represent that, no matter how hard that makes it to maximize your utility function. If you are looking for a group utility function, you should be concerned with what best represents the group members, given their relative negotiating power, not with what sort of average or other combination is easiest to maximize.
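To make the distinction concrete, here is a minimal sketch (the members, weights, and options are invented for illustration, not taken from Phil’s post). A sum over the members’ utilities, weighted by negotiating power, tracks what the group actually values; a “conflict-free” aggregate that rates every option the same is trivially easy to maximize but represents nobody:

```python
# A minimal sketch, with invented agents and numbers: judge a group utility
# function by how well it represents its members, not by how easy its
# maximum is to reach.

options = ["A", "B", "C"]

# Each member's utility over the options.
member_utils = {
    "alice": {"A": 10.0, "B": 6.0, "C": 0.0},
    "bob":   {"A": 0.0,  "B": 7.0, "C": 9.0},
}
# Relative negotiating power of each member.
power = {"alice": 0.5, "bob": 0.5}

def representative_group_util(option: str) -> float:
    """Power-weighted sum: tracks what the members actually value."""
    return sum(power[m] * u[option] for m, u in member_utils.items())

def easy_group_util(option: str) -> float:
    """Degenerate 'conflict-free' aggregate: rates every option the same.
    Trivially maximized anywhere, but it represents nobody's values."""
    return 1.0

print(max(options, key=representative_group_util))  # "B": the real compromise
print(max(options, key=easy_group_util))            # "A", only by tie-breaking
```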
I meant ‘why did you talk about it in that exact context’, not ‘why do you ever talk about it’.
I understand, and I did respond to that question.
After all, it is neither impossible nor hard to describe a von Neumann-Morgenstern agent in Phil’s system. They are a subset of the agents that he wrote about.
If you think so, then describe a situation where reasonably complex, ideally rational agents would want to combine their utility functions in the way that Phil is suggesting. (I don’t think this even makes sense if they assign non-linear utility to values within the ranges achievable in the environment.)
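To unpack that parenthetical, here is a minimal sketch under assumed convex utilities (the functions and the 50/50 lottery are my illustration, not Phil’s model). When utility is non-linear in the underlying values, a compromise chosen to minimize conflict between the values can be strictly worse, for every agent, than a fair lottery over the “conflicting” extremes:

```python
# A minimal sketch with assumed convex utilities (my illustration, not
# Phil's model): non-linear utility can make every ideally rational agent
# prefer a lottery over the extremes to the conflict-minimizing compromise.

def u1(x: float) -> float:
    """Agent 1's utility in value x; convex, i.e. increasing returns."""
    return x ** 2

def u2(y: float) -> float:
    """Agent 2's utility in value y; the environment forces y = 1 - x."""
    return y ** 2

# Conflict-minimizing compromise: split the achievable range evenly.
compromise = (u1(0.5), u2(0.5))             # (0.25, 0.25)

# A fair coin flip between the corner outcomes (x=1, y=0) and (x=0, y=1).
lottery = (0.5 * u1(1.0) + 0.5 * u1(0.0),   # 0.5 expected for agent 1
           0.5 * u2(0.0) + 0.5 * u2(1.0))   # 0.5 expected for agent 2

# Both agents strictly prefer the lottery to the compromise.
assert lottery[0] > compromise[0] and lottery[1] > compromise[1]
```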
Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?
I deny that this is an accurate description of Phil’s concept. My criticism is that agents have to make serious rationality mistakes in order to care about Phil’s reasons for recommending this combining process.
… choosing a utility function that is easy to maximize …
Where in the TLP do you see this?
I understand, and I did respond to that question.
Do you talk about all useful things in all contexts? Otherwise, how is an explanation of why it is valuable a reasonable response to a question about what you did in a specific context?
If you think so …
Do you actually see this as controversial?
… then describe a situation where reasonably complex, ideally rational agents would …
If you think that this is relevant, then explain how you think that a model that only works for ideally rational agents is useful for arguing about what values actual humans should give to an AI.
I deny that this is an accurate description of Phil’s concept.
It is not a description of his concept. It is a question about your grounds for dismissing his model without any explanation.
… choosing a utility function that is easy to maximize …
Where in the TLP do you see this?
Phil is trying to find a combined value system that minimizes conflicts between values. This would allow tradeoffs to be avoided. (Figuring out which tradeoffs to make when your actual values conflict is a huge strength of utility functions.) Do you see another reason to be interested in this comparison of value system combinations?
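For concreteness, here is a minimal sketch (the plans, values, and weights are invented for illustration): a utility function does not avoid the tradeoff between conflicting values, it makes the exchange rate explicit and then decides:

```python
# A minimal sketch (invented plans, values, and weights) of the point above:
# a utility function decides tradeoffs between conflicting values explicitly
# rather than avoiding them.

# Candidate plans, each scoring differently on two conflicting values.
plans = {
    "fast":     {"speed": 9.0, "safety": 2.0},
    "careful":  {"speed": 3.0, "safety": 9.0},
    "balanced": {"speed": 6.0, "safety": 6.0},
}

# The weights encode how much each value matters relative to the other.
weights = {"speed": 1.0, "safety": 2.0}

def utility(plan: dict[str, float]) -> float:
    """Weighted sum over values: the tradeoff rate is explicit, not avoided."""
    return sum(weights[v] * score for v, score in plan.items())

best = max(plans, key=lambda name: utility(plans[name]))
print(best)  # "careful": safety is weighted twice as heavily as speed
```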
I understand, and I did respond to that question.
Do you talk about all useful things in all contexts? Otherwise, how is an explanation of why it is valuable a reasonable response to a question about what you did in a specific context?
Do you have to respond to everything with an inane question? Your base-level question has been answered.
If you think so …
Do you actually see this as controversial?
I see it as an unsupported claim, and I see this question as useless rhetoric that distracts from your claim’s lack of support and from the points I was making. So, let’s bring this back to the object level. Do you see a scenario where a group of ideally rational agents would want to combine their utility functions using this procedure? If you think it is only useful for more general agents as a way to cope with their irrationality, do you see a scenario where a group of ideally rational agents who each care about a different general agent (and want that general agent to be effective at maximizing its own fixed utility function) would advise the general agents they care about to combine their utility functions in this manner?
It is not a description of his concept.
A concept that “can be extended to cover situations other than the most idealized ones” is your description of Phil’s concept, contained in your question. It would make this discussion a lot easier if you did not flatly deny reality.
It is a question about your grounds for dismissing his model without any explanation.
Do you always accuse people of dismissing models without explanation when they have in fact dismissed the model with an explanation? (If you forgot, the explanation is that the model is trying to figure out which combined value system/utility function is easiest to satisfy/maximize instead of which one best represents the input value systems/utility functions that represent the actual values of the group members.)
How do you like being asked questions which contain assumptions you disagree with?
I think it’d be a good policy to answer the question before discussing why it might be misguided. If you don’t answer the question and only talk about it, you end up running in circles and not making progress.
For example:
Instead of:
Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?
I deny that this is an accurate description of Phil’s concept....
It is not a description of his concept. It is a question about your grounds for dismissing his model without any explanation.
A concept that “can be extended to cover situations other than the most idealized ones” is your description of Phil’s concept, contained in your question
It could be:
Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?
No, of course not. I deny that this is an accurate description of Phil’s concept....