Let’s imagine that True Virtue is seeking and eating ice cream, but that you don’t know what true virtue is for some reason.
Now let’s imagine that we have some algorithm for turning intelligence into virtuous agency. (This is not an assumption I’m actually willing to grant, since you haven’t given anything like an argmax for virtue, and that gap is really the biggest issue with my proposal, but let’s entertain it so I can make my point.)
If the algorithm is run on top of an implementation of intelligence that is not good enough, then the resulting agent might mistakenly turn down some opportunities to get ice cream and instead do something else, such as pursue money (but less money than the ice cream is worth). Observing this, you would conclude that pursuing ice cream is not virtuous, or at least not as virtuous as pursuing money.
If you then turn up the level of intelligence, the resulting agent would pursue ice cream in the situation where it previously pursued money. However, this would make it score worse on your inferred utility function, in which pursuing money is more virtuous than pursuing ice cream.
Now of course you could say that your conclusion that pursuing ice cream is less virtuous than pursuing money is wrong. But you can only say that if you grant that you cannot infer a virtue-ethical utility function from a virtue-ethical policy, since that is exactly the utility function that was inferred from the policy.
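To make this concrete, here is a toy sketch (in Python) of the failure I’m describing. Everything in it is an assumption I’m inventing purely for illustration: “intelligence” is just the probability of recognizing the ice-cream option, and the “inferred utility function” is a naive revealed-preference fit that scores whatever the observed policy chose as 1 and the alternative as 0. It is not meant as a model of virtue, only as a picture of the inference going wrong.

```python
import random

random.seed(0)

# Toy setup, purely for illustration: ten choice points, each offering
# either ice cream (True Virtue, by hypothesis) or a smaller pile of money.
SITUATIONS = range(10)
OPTIONS = ("ice cream", "money")


def policy(intelligence):
    """Return a mapping situation -> chosen option.

    'intelligence' is modelled (an assumption of this sketch) as the
    probability of correctly spotting the ice-cream option; otherwise the
    agent mistakenly grabs the money.
    """
    return {
        s: "ice cream" if random.random() < intelligence else "money"
        for s in SITUATIONS
    }


def infer_utility(observed_policy):
    """Naive revealed-preference fit: whatever the observed policy chose in
    a situation gets utility 1, the rejected option gets utility 0."""
    return {
        s: {opt: 1.0 if opt == choice else 0.0 for opt in OPTIONS}
        for s, choice in observed_policy.items()
    }


def score(some_policy, utility):
    """Total utility the given policy earns under the given utility function."""
    return sum(utility[s][some_policy[s]] for s in SITUATIONS)


# Fit a utility function to the flawed, low-intelligence policy...
dumb = policy(intelligence=0.6)
inferred_u = infer_utility(dumb)

# ...then evaluate a smarter policy, which pursues ice cream more reliably,
# against that same inferred utility function.
smart = policy(intelligence=0.95)

# The dumb policy scores maximally by construction; the smart policy scores
# lower here, because the inferred utility function has already baked the
# dumb policy's mistakes in as "virtuous".
print("dumb policy under inferred utility:", score(dumb, inferred_u))
print("smart policy under inferred utility:", score(smart, inferred_u))
```

The only point of the sketch is the final comparison: the fitted utility function treats the low-intelligence policy’s mistakes as data about virtue, so a more intelligent policy that corrects those mistakes ends up looking less virtuous by that fitted standard.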
“infer a virtue-ethical utility function from a virtue-ethical policy”
The assumption of virtue ethics isn’t that virtue is unknown and must be discovered—it’s that it’s known and must be pursued. If the virtuous action, as you posit, is to consume ice cream, intelligence would allow an agent to acquire more ice cream, eat more over time by not making themselves sick, etc.
But any such decision algorithm, for a virtue ethicist, routes through continued re-evaluation of whether the acts are virtuous in the current context, rather than embracing some farcical LDT version of needing to pursue ice cream at all costs. There is an implicit utility function which values intelligence, but it is not then inferring back what virtue is, as you seem to claim. Your assumption, which is evidently that the entire thing turns into a compressed and decontextualized utility function (“algorithm”), ignores the entire hypothetical.
The assumption of virtue ethics isn’t that virtue is unknown and must be discovered—it’s that it’s known and must be pursued.
If it is known, then why do you never answer my requests for an explicit algorithm for converting intelligence into virtuous agency, and instead run in circles about how There Must Be A Utility Function!?
If the virtuous action, as you posit, is to consume ice cream, intelligence would allow an agent to acquire more ice cream, eat more over time by not making themselves sick, etc.
I’m not disagreeing with this. I’m saying that if you apply the arguments which show that you can fit a utility function to any policy to the policies that mistakenly turn down some ice cream, then as you increase intelligence, and the pursuit of ice cream increases with it, the resulting policies will score lower on the fitted utility function, which values turning down ice cream.
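To spell out the step I’m relying on, here is the standard construction that rationalizes an arbitrary policy $\pi$ with a utility function, stated only as a sketch of what I mean by “fit a utility function to any policy”:

$$
U_\pi(s, a) =
\begin{cases}
1 & \text{if } a = \pi(s), \\
0 & \text{otherwise.}
\end{cases}
$$

By construction, $\pi$ is optimal for $U_\pi$, and any policy $\pi'$ that deviates from $\pi$ in some situation, for instance by taking the ice cream that $\pi$ mistakenly turned down, scores strictly lower on $U_\pi$ there. That is all I mean by “the resulting policies will score lower”.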
But any such decision algorithm, for a virtue ethicist, routes through continued re-evaluation of whether the acts are virtuous in the current context, rather than embracing some farcical LDT version of needing to pursue ice cream at all costs. Your assumption, which is evidently that the entire thing turns into a compressed and decontextualized utility function (“algorithm”), ignores the entire hypothetical.
You’re the one who said that virtue ethics implies a utility function! I didn’t say anything about it being compressed and decontextualized, except as a hypothetical example of what virtue ethics might be, because you refused to provide an implementation of virtue ethics and instead required abstracting over it.
I’m not interested in continuing this conversation until you stop strawmanning me.
No, that’s not my argument.