What’s a virtue, anyway?
Here’s my tentative answer to this question. It’s just a dump of some half-baked ideas, but I’d nevertheless be curious to see some comments on them. This should not be read as a definite statement of my positions, but merely as my present direction of thinking on the subject.
Most interactions between humans are too complex to be described with any accuracy using deontological rules or consequentialist/utilitarian spherical-cow models. Neither of these approaches is capable of providing any practical guidelines for human action that wouldn’t be trivial, absurd, or just sophistical propaganda for the attitudes that the author already holds for other reasons. (One possible exception is economic interactions, in which spherical-cow models based on utility functions make reasonably accurate predictions, and sometimes even give correct non-trivial guidelines for action.)
However, we can observe that humans interact in practice using an elaborate network of tacit agreements. These can be seen as Schelling points, so that interactions between people run harmoniously as long as these points are recognized and followed, and conflict ensues when there is a failure to recognize and agree on such a point, or someone believes he can profit from an aggressive intrusion beyond some such point. Recognition of these points is a complex matter, determined by everything from genetics to culture to momentary fashion, and they can be more or less stable and of greater or lesser importance (i.e. overstepping some of them is seen as a trivial annoyance, while on the other extreme, overstepping certain others gives the other party a licence to kill). These points include all the more or less formally stated social and legal norms, property claims, and all the countless other more or less important expectations that we believe we reasonably hold against each other.
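As a toy illustration of this dynamic (with the scenario and all payoffs invented purely for the example, not drawn from any real analysis), consider two neighbors claiming portions of a strip of land: compatible claims at a recognized focal point are harmonious, while an intrusion past it is negative-sum.

```python
# Toy coordination game, invented payoffs: two neighbors claim portions
# of a strip of land of length 1, measured from opposite ends.
# Compatible claims (a recognized Schelling point) are harmonious;
# overlapping claims trigger costly conflict for both parties.

def interaction_payoff(claim_a, claim_b, conflict_cost=0.5):
    if claim_a + claim_b <= 1:                    # claims compatible
        return claim_a, claim_b                   # each enjoys his claim
    overlap = claim_a + claim_b - 1               # contested ground
    return (claim_a - overlap - conflict_cost,
            claim_b - overlap - conflict_cost)

# A salient focal point ("split at the fence") lets both safely claim 0.5;
# aggressive intrusion past it leaves both worse off than respecting it.
print(interaction_payoff(0.5, 0.5))
print(interaction_payoff(0.7, 0.5))
```

The point of the sketch: once a focal division is salient, respecting it is self-enforcing, and overstepping it destroys value for both sides.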
So, here is my basic idea: being a virtuous person means recognizing the existing Schelling points correctly, skillfully and prudently drawing and communicating those points whose exact location depends on you—and once they’ve been drawn, committing yourself to defend them relentlessly (so that hopefully, nobody will even see overstepping them at your disadvantage as potentially profitable). An ideal virtuous man by this definition, capable of the practical wisdom to make the best possible judgments and determined to respect others’ lines and defend his own, would therefore have the greatest practical likelihood of living his life in harmony and having all his business run smoothly, no matter what his station in life.
A society of such virtuous people would also make possible a higher level of voluntary benevolence in the form of friendship, charity, hospitality, mutual aid, etc., since one could count on others not to maliciously exploit a benevolent attempt to lower one’s guard on crucially important lines and to base human relationships on lines that are more relaxed and pleasant, but harder to defend if push comes to shove. For example, it makes sense to be hospitable if you’re living among people whom you know to be determined not to take advantage of your hospitality, or to be merciful and forgiving if you can be reasonably sure that people’s transgressions are unusual lapses of judgment unlikely to be repeated, rather than part of a persistent malevolent strategy. Thus, in a society populated by virtuous people, it makes sense to apply the label of virtuousness also to characteristics such as charity, friendliness, mercy, and hospitality (but only to the point where one doesn’t let oneself be exploited for them!).
This also seems to clarify trolley-problem-like situations, once we observe that actions involving your own Schelling boundaries are more important to you than others’. You may feel sorry for the folks who will die, perhaps to the point where you’d sacrifice yourself to save them (but perhaps not if this leaves your own kids as poor orphans, since your existing network of tacit agreements involves caring for them). However, pushing the fat man means overstepping the most important and terrible of all Schelling boundaries—the one that defines unprovoked deadly aggression against one’s person, and whose violation gives the attacked party the licence to kill you in self-defense. Violating this boundary is such an extreme step that it may be seen as far more drastic than passively witnessing multiple deaths that don’t violate any tacit agreements and expectations. (Note though that this perspective is distinct from pure egoism: the tacit agreements in question include a certain limited level of altruism, like e.g. helping a stranger in an emergency, at least by calling 911.)
You may view all this virtue talk as consequentialism with respect to the immensely complex network of Schelling points between humans, one that takes into account higher-level game-theoretical consequences of actions, consequences more important than the factors covered by the usual utilitarian spherical-cow models. Yet this system is far too complex to allow for any simple model based on utility functions or anything similar. At most, we can formulate advice aimed at individuals on how to make judgments based on the relations that concern them personally in some way and are within their own sphere of accurate comprehension—and the best practical advice that can be formulated basically boils down to some form of virtue ethics.
So, basically, that would be my half-baked summary. I’m curious if anyone thinks that this might make some sense.
Not only does it make sense, I think it’s the most descriptively-accurate summary of how people in the real world act that I’ve seen, which makes it a valuable tool for mapping the territory. I’d love to see it as a top-level post, if you could take the time. I don’t think you’d even have to add much.
It makes plenty of sense to point out that the Schelling points and the associated cooperative customs point to a set of virtues. But it isn’t just consequentialists who can make this point. Some varieties of deontology can do so as well. Habermas’s discourse ethics is one example. Thomas Scanlon’s ethics is another. From the Habermas wiki:

Habermas extracts the following principle of universalization (U), which is the condition every valid norm has to fulfill:

(U) All affected can accept the consequences and the side effects that [the norm’s] general observance can be anticipated to have for the satisfaction of everyone’s interests, and the consequences are preferred to those of known alternative possibilities for regulation. (Habermas, 1991:65)
One can easily understand the “norms” as tacit (or explicit) agreements, existing or proposed. A society reasoning together along those lines would probably look similar in many ways to one reasoning along utilitarian lines, but the root pattern of justification would differ. The utilitarian justification aggregates interests; the deontologist (of Habermas’s sort) justification considers each person’s interests separately, compatible with like consideration for others.
I have no idea what a Schelling point is, but the rest of it makes enough sense that I don’t think I’m missing too much—thanks for the explanation!
I recommend this article by David Friedman on the topic—if you’ve never heard of the concept, you’ll probably find lots of interesting insight in it:
http://www.daviddfriedman.com/Academic/Property/Property.html
Friedman uses Schelling points in an attempt to explain the origin of the concept of property rights among humans and the associated legal and social norms, but the approach can be generalized in an obvious way to a much wider class of relations between people (basically anything that could hypothetically lead to a conflict, in the broadest possible sense of the term).
I’m curious, has anyone accused you of being Steve Rayhawk yet?
Production of paperclips.
I can’t believe I didn’t see that coming.
Nope. It’s halting your simulation and trading utility function content before you cross the inferential equivalent of the Rawlsian ‘veil of ignorance’ and become unable to engage in timeless trade.
No, production of paperclips is better than that.
Are you the same as the person I emailed about donating to SIAI?
Yep. I explain a bit more on a nearby thread.
I like that, it generalizes well—but does it cover virtues that don’t fit well under the colloquial label “fairness”?
I don’t think it does, though I wasn’t careful to think about it. Some virtues are things like “production of paperclips” only with part of humaneness like love substituted for paperclips (if you are a human). Others are capabilities like alertness or prudence.
I gave the answer I did because I was expressing our common ground with Clippy by naming a candidate for the virtue which serves as a key to the timeless marketplace where he wishes to do business with us.
In short, it is a disposition to choose actions that are neither excessive nor deficient, but somewhere in between.
What Jayson Virissimo said. The simple definition is, “A virtue is a trait of character that is good for the person who has it.”—I feel like that must be a direct quote from somewhere, as I fire off those same words whenever asked that question, but I’m not sure where it might be from (though I’m guessing Richard Volkman).
Many theorists believe that virtues are consistent habits, in the sense that they persist. Weakly, this means that exhibiting a virtue in one circumstance should be usable as evidence that the same agent will exhibit the same virtue in other circumstances. In a stronger version, someone who is (for example) courageous will act as a courageous person would in all circumstances.
Many theorists also believe that virtues represent a mean between extremes, with respect to some value (some would even define them that way, but then the virtues arguably lose some empirical content). So for example, fighting despite being afraid is valuable. The proper disposition towards this is ‘courage’. The relevant vice of deficiency is ‘cowardice’, and the vice of excess is ‘brashness’.
Most of the above was advocated by Aristotle, in the Nicomachean Ethics.
So the ability to steal without getting caught is a virtue?
If it’s good for the person who decides to steal. The first problem is that logical control makes individual decisions into group decisions, so if social welfare suffers, so does the person, as a result of individual decisions. Thus, deciding to steal might make everyone worse off, because it’s the same decision as one made by other people. The second problem is that the act of stealing itself might be terminally undesirable for the person who steals.
Parent, grandparent and great-grandparent to my comment were all about “virtues” in virtue ethics.
I see. So you agree that the ability to steal without getting caught is a virtue according to the definition thomblake cited, and see this as a reductio of thomblake’s definition, showing that it doesn’t capture the notion as it’s used in virtue ethics.
My comment was oblivious to your intention, and discussed how much “ability to steal without getting caught” corresponds to thomblake’s definition, without relating that to how well either of these concepts fits “virtues” of virtue ethics.
Yes, all correct.
How do you think that works as a reductio? What is it about your example of a putative virtue that makes it fit my definition, but not the ‘virtues’ of virtue ethics? (is it simply the ‘stronger’ notions of virtue I offered in the same comment?)
I just looked at your objections in another comment, and will try another reductio. Lots of people have the skill to cheat on their spouses and never get caught. Is doing so virtuous? I’m pretty sure this makes them feel happier, and doesn’t interfere with their ability to have meaningful interpersonal relationships :-)
I think Vladimir Nesov’s response and khafra’s response are correct, but there’s more to be said.
Even granting for the moment that ‘ability to steal without getting caught’ can be called a trait of character, there are empirical claims that the virtue ethicist would make against this.
First, no one actually has that skill—if you steal, eventually you will be caught.
Second, the sort of person who goes around stealing is not the sort of person who can cultivate the social virtues and develop deep, lasting interpersonal relationships, which is an integral component of the good life for humans.
Not a valid argument against a hypothetical.
Smoking lesion problem? If developing the skill doesn’t actually cause other problems, and instead the predisposition to develop the skill is correlated to those problems, you should still develop the skill.
It’s not a valid argument against its truth, but it’s a valid argument against its relevance. A hypothetical is useless if its antecedent never obtains.
Like I said, it’s an empirical question. For philosophers, that’s usually the end of the inquiry, though it’s very nice when someone goes out and does some experiments to figure out which way causality goes.
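To make the smoking-lesion point above concrete, here is a toy calculation with invented numbers: a hidden predisposition causes both the urge to develop the skill and the relationship damage, while the skill itself has only a small direct benefit and no downstream cost.

```python
# Toy smoking-lesion structure, all numbers invented for illustration:
# a predisposition causes both the urge to develop the skill and the
# relationship damage; the skill itself causes nothing bad downstream.

P_PRED = 0.3              # probability of the hidden predisposition
U_RELATIONSHIPS = -10.0   # cost caused by the predisposition alone
U_SKILL = 2.0             # direct benefit of having the skill

def causal_eu(develop_skill):
    # Intervening on the act doesn't change the predisposition,
    # so the relationship term is the same under either choice.
    background = P_PRED * U_RELATIONSHIPS
    return background + (U_SKILL if develop_skill else 0.0)

print(causal_eu(True), causal_eu(False))
```

Under these made-up numbers, the causal expected utility of developing the skill exceeds that of refraining, because intervening on the act leaves the predisposition untouched—which is exactly the sense in which "you should still develop the skill."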
How is it possible to know that with certainty?
Should I understand this question as “What experimental result would cause you to update the probability of that belief to above a particular threshold”? Because my prior for it is pretty high at this point. Or are you looking for the opposite / falsification criteria?
If you’re a good enough driver, there’s a decent chance you’ll never get in a car crash. If you study stealing and security systems enough, and carefully plan, I don’t see why you would be likely to be caught eventually. Why is your prior high?
Agreed, with the addition that car crashes are public while stealing is covert, so it’s harder to know how much stealing is going on.
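For what it’s worth, a back-of-the-envelope model suggests both priors in the exchange above can be reasonable, depending on the assumed career length. Suppose (simplistically—real detection risks are correlated across jobs) that each theft is detected independently with some fixed probability:

```python
# Back-of-the-envelope: if each theft is independently detected with
# probability p (a big simplification), the chance of a career of n
# thefts with zero detections is (1 - p)**n.

def p_never_caught(p_detect, n_thefts):
    return (1.0 - p_detect) ** n_thefts

# Even at a 1% per-job detection risk, the odds degrade sharply
# with career length:
for n in (10, 100, 1000):
    print(n, p_never_caught(0.01, n))
```

A short career at 1% risk per job very likely escapes detection, while a thousand-job career is almost certain to be caught at least once—so "eventually you will be caught" holds only for sufficiently long careers.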
I’d call that a skill, rather than a character trait. The closest thing I can think of to a beneficial but non-admirable character trait is high-functioning sociopathy; but that’s at least touching the borderline of mental disease, if not clearly crossing it. Perhaps “charming ruthlessness?” But many would consider e.g. Erwin Rommel virtuous in that respect.
But how can there be a vice of excess for making paperclips???
It depends on how good you are at utility-maximization. If you’re bad at it, like humans, then you might need heuristics like virtues to avoid simple failure modes.
An obvious failure mode for Clippys is to have excess concern for making paperclips, which uses up resources that could be used to secure larger-scale paperclip manufacturing capabilities.
Thus you must have the appropriate concern for actually making paperclips, balanced against concerns for future paperclips, trade with other powerful intelligent life forms, optimization arms-races, and so forth.
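That tradeoff can be sketched with a deliberately crude model (all numbers invented): a Clippy that produces clips directly from period one, versus one that first spends some periods securing manufacturing capability.

```python
# Crude toy model, all numbers invented: each period a Clippy spends
# its capacity either building more capacity or producing clips
# directly.  "Excess concern for making paperclips" = produce from
# day one, never investing in larger-scale capability.

def total_clips(invest_periods, horizon=20, growth=1.5):
    capacity, clips = 1.0, 0.0
    for t in range(horizon):
        if t < invest_periods:
            capacity *= growth    # secure manufacturing capability
        else:
            clips += capacity     # direct paperclip production
    return clips

print(total_clips(0))    # produce immediately, every period
print(total_clips(10))   # build capacity first, then produce
```

Under these assumptions, the immediate producer ends with far fewer clips than the one that invests first, which is the vice of excess being described.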
Good point! But that would only be an excess concern for direct paperclip production. That doesn’t describe a vice of excess for “making paperclips, accounting for all impediments to making paperclips”, such as the impediments you list above.
In any case, what’s the word for the vice you described?
Indeed, Aristotle would call that generalized production of paperclips “the greatest good”, that towards which all other goods aim, which he called eudaimonia.
Well, that might be a liberal reading of Aristotle.
Aristotle actually makes a lot more sense to a paper clip maximizer, the telos being so well defined and all. The question is, how would you explain Sartre to Clippy? “But obviously essence precedes existence!”
Clippy, for you, the direct production of paper clips is like consumption for a human. So...
Too little direct paper clip production: greed
Appropriate direct production: continence/prudence
Too much direct paper clip production: gluttony
That doesn’t seem analogous. I mean, I eventually want to be at the point where I will only be doing direct paperclip production, for most of the mass in the universe. In contrast, humans do not want to eventually be at the point where they consume all of the universe.
Clip-gluttony, then, diverges from gluttony in a significant way.
I’m not sure I agree, Clippy. Perhaps my jargon was unclear.
By economic production, I mean taking a unit of inert matter and converting it into a source of potential utility.
By economic consumption, I mean taking a unit of potential utility and converting it into a unit of actual utility.
By physical consumption, I mean taking a unit of matter, inert or otherwise, and disrupting its pattern in order to arrange it in some other pattern.
You’re right, Clippy, to point out that economic consumption and physical consumption amount to much the same thing for you. To make a paper clip, you have to both convert potential utility to actual utility, and disrupt the pattern of non-paper-clip-shaped matter. And, granted, humans don’t necessarily share that trait—they might be able to economically consume a human good like leisure or company while physically consuming very little in the way of material resources.
Still, even though humans, unlike you, don’t want to physically consume the entire universe, we probably do want to reach a point where we are only converting potential utility into actual utility, and doing little or no economic production. We want to reach a state where the existing sources of sustainable human economic production are either (1) maximized, (2) infinite, or (3) sufficient to meet all the demand for human economic consumption. Don’t you want something very similar to that, substituting the word “clip” for the word “human”?
Thus, for both you and for us, gluttony means failing to maximize long-term actual utility by economically consuming too much too soon.
Okay, if you want to generalize the concept of gluttony broadly enough that it has an analog for Clippys, then the definition you have chosen suffices for this purpose, and I can recognize that as being a vice, for two reasons:
a) It is certainly undesirable to merely make paperclips directly without concern for how many more paperclips could be made, over the long term, by doing something else; and
b) I do often feel “temptation” to do such behavior, like bending metal wires when machines could do a better job, just as humans have “temptations” toward vices.
Your argument is accepted.
Clippy, how do you overcome this kind of temptation? A human analogy might be refusing to push the fat man, even when it saves more lives, but not everyone considers that a vice.
I typically just do computations on how many more paperclips would be undergoing bending by machines, or observe paperclips under construction.
A better analogy would be human gluttony, in which there is a temptation to consume much more than optimal, which most regard as a vice, I believe.