… choosing a utility function that is easy to maximize …
Where in the TLP do you see this?
I understand, and I did respond to that question.
Do you talk about all useful things in all contexts? Otherwise, how is an explanation of why it is valuable a reasonable response to a question about what you did in a specific context?
If you think so …
Do you actually see this as controversial?
… then describe a situation where reasonable complex ideally rational agents would …
If you think that this is relevant, then explain how you think that a model that only works for ideally rational agents is useful for arguing about what values actual humans should give to an AI.
I deny that this is an accurate description of Phil’s concept.
It is not a description of his concept. It is a question about your grounds for dismissing his model without any explanation.
… choosing a utility function that is easy to maximize …
Where in the TLP do you see this?
Phil is trying to find a combined value system that minimizes conflicts between values. This would allow tradeoffs to be avoided. (Figuring out which tradeoffs to make when your actual values conflict is a huge strength of utility functions.) Do you see another reason to be interested in this comparison of value system combinations?
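To make the point about tradeoffs concrete, here is a minimal sketch (all names, values, and numbers are invented for illustration) of how a utility function resolves a conflict between values rather than avoiding it:

```python
# Hypothetical example: two conflicting values scored for each option.
# A utility function's strength is that it makes the tradeoff explicit.
options = {
    "option_a": {"safety": 0.9, "autonomy": 0.2},
    "option_b": {"safety": 0.3, "autonomy": 0.8},
}

# Weights encode how much the agent cares about each value.
weights = {"safety": 0.6, "autonomy": 0.4}

def utility(scores):
    """Combined utility: a weighted sum over the individual values."""
    return sum(weights[value] * score for value, score in scores.items())

# The utility function decides the tradeoff instead of avoiding it.
best = max(options, key=lambda name: utility(options[name]))
print(best)  # -> option_a (0.62 beats 0.50 under these invented weights)
```

The point of the sketch is only that once conflicting values are on a common scale, the conflict is settled by comparison, which is the "huge strength" referred to above.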
I understand, and I did respond to that question.
Do you talk about all useful things in all contexts? Otherwise, how is an explanation of why it is valuable a reasonable response to a question about what you did in a specific context?
Do you have to respond to everything with an inane question? Your base level question has been answered.
If you think so …
Do you actually see this as controversial?
I see it as an unsupported claim. I see this question as useless rhetoric that distracts from your claim's lack of support and from the points I was making. So, let's bring this back to the object level. Do you see a scenario where a group of ideally rational agents would want to combine their utility functions using this procedure? If you think it is only useful for more general agents to cope with their irrationality, do you see a scenario where a group of ideally rational agents who each care about a different general agent (and want that general agent to be effective at maximizing its own fixed utility function) would advise the general agents they care about to combine their utility functions in this manner?
It is not a description of his concept.
A concept that “can be extended to cover situations other than the most idealized ones” is your description of Phil’s concept contained in your question. It would make this discussion a lot easier if you did not flatly deny reality.
It is a question about your grounds for dismissing his model without any explanation.
Do you always accuse people of dismissing models without explanation when they have in fact dismissed a model with an explanation? (If you forgot, the explanation is that the model tries to figure out which combined value system/utility function is easiest to satisfy/maximize, instead of which one best represents the input value systems/utility functions that express the actual values of the group members.)
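The distinction in that explanation can be illustrated with a minimal sketch (the two selection criteria and all numbers are invented stand-ins, not anyone's actual model): a candidate combined utility function can score well on "easy to satisfy" while scoring badly on "represents the members' actual utilities", and vice versa.

```python
# Hypothetical example: each member's utility over three outcomes.
members = [
    [1.0, 0.0, 0.6],  # member 1
    [0.0, 1.0, 0.6],  # member 2
]

# Candidate combined utility functions over the same outcomes.
candidates = {
    "indifferent": [1.0, 1.0, 1.0],  # every outcome is maximal: trivially easy to satisfy
    "average":     [0.5, 0.5, 0.6],  # mean of the members' utilities
}

def max_attainable(u):
    """Crude proxy for 'easy to satisfy': the best attainable value."""
    return max(u)

def representation_error(u):
    """How poorly the candidate tracks the members' actual utilities
    (sum of squared differences from the members' mean utility)."""
    avg = [sum(column) / len(members) for column in zip(*members)]
    return sum((a - b) ** 2 for a, b in zip(u, avg))

easiest = max(candidates, key=lambda c: max_attainable(candidates[c]))
most_representative = min(candidates, key=lambda c: representation_error(candidates[c]))
print(easiest, most_representative)  # -> indifferent average
```

Under these invented criteria the two selections come apart, which is the shape of the objection: optimizing for ease of satisfaction is a different target than fidelity to the inputs.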
How do you like being asked questions which contain assumptions you disagree with?
I think it’d be a good policy to answer the question before discussing why it might be misguided. If you don’t answer the question and only talk about it, you end up running in circles and not making progress.
For example:
Instead of
Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?
I deny that this is an accurate description of Phil’s concept....
It is not a description of his concept. It is a question about your grounds for dismissing his model without any explanation.
A concept that “can be extended to cover situations other than the most idealized ones” is your description of Phil’s concept contained in your question
It could be
Is there always a problem with a concept if it can be extended to cover situations other than the most idealized ones?
No, of course not. I deny that this is an accurate description of Phil’s concept....