Lord Anthony of House Stark has developed a technology to create copies of people. He offers to make 99,999 copies of yourself; in exchange, you and your copies will have to become his serfs and live the rest of your lives as medieval subsistence farmers. Assume that:
Living as a subsistence farmer is less desirable than your current lifestyle, but not so undesirable that you would wish to kill yourself.
If you refuse his offer, your lifestyle is not going to be disrupted by extreme events such as catastrophes or technological singularities.
Questions:
1) Do you accept his offer?
2) Do you believe that accepting the offer is moral, immoral, or morally neutral?
(first appeared here)
1) No.
2) Probably morally neutral, at least in the sense that all self-inflicted harm can be considered morally neutral.
There is a lot of room between the two. It might be worth specifying something more concrete along the lines of EY’s proposal of “lives barely worth celebrating”.
Or maybe specify the number p such that I’d be indifferent between becoming a subsistence farmer with probability 1 and killing myself with probability p.
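Concretely, with notation I’m introducing here rather than taking from the post (U(current), U(farmer) and U(death) for the utilities of the three outcomes), p would solve the indifference condition

$$U(\text{farmer}) = p \cdot U(\text{death}) + (1 - p) \cdot U(\text{current}),$$

which gives

$$p = \frac{U(\text{current}) - U(\text{farmer})}{U(\text{current}) - U(\text{death})}.$$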
How many pebbles form a heap?
In Java, the default heap is 128 megabytes, or 1,073,741,824 bits. If you assume that half the heap will have a 1 instead of a 0, i.e. a pebble as opposed to no pebble, I would say that around 536,870,912 pebbles form a heap.
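For anyone who wants to check the arithmetic, a throwaway sketch (taking the quoted 128 MB default at face value; the class name is mine):

```java
// Throwaway sketch of the arithmetic above, assuming the quoted 128 MB default heap.
public class PebbleHeap {
    public static void main(String[] args) {
        long heapBytes = 128L * 1024 * 1024; // 128 megabytes in bytes
        long heapBits = heapBytes * 8;       // 1,073,741,824 bits
        long pebbles = heapBits / 2;         // assume half the bits are 1s ("pebbles")
        System.out.println(heapBits + " bits in the heap, " + pebbles + " pebbles");
    }
}
```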
“Lives barely worth celebrating” doesn’t sound very concrete to me. Do you have a better proposal?
I think the difference between “Lives worth living” and “Lives worth celebrating” is basically a difference between “I opt to not mercy-kill this person.” and “I opt to bring this person into existence”—the precise levels of happiness/utility required are of course subjective, but the latter is generally considered to be higher than the former...
The main reason I don’t kill miserable people in the real world (other than ethical injunctions) is that it would sadden other people and have other negative externalities. ISTM that certain thought experiments yield preposterous results as a result of neglecting this.
The question asks if you opt to bring 99,999 people into existence. Adding the assumption that it is worth bringing those people into existence would beg the question.
Yeah, my formulation of this was a bit clumsy. Perhaps instead of
a1) “I opt to not mercy-kill this person.” and
b1) “I opt to bring this person into existence”
we could have
a2) “I prefer it that this person continues living.” and
b2) “I prefer that this person existed in the first place from the counterfactual in which they never existed.”
This slightly detaches the decision (the verb “opt”) from the statement of preferences.
Also, even with the earlier formulation, there are, I guess, nitpicks that can be made: bringing the same person into existence 99,999 times may not be valued in the same way that bringing 99,999 different persons into existence would be.
No, as you’d also be taking your current life as a person-better-off-than-a-subsistence-farmer out of existence.
http://www.overcomingbias.com/2012/08/no-theory-x-in-shining-armour.html#comment-614620591
Of course not. Why the hell would I?
If you were a total utilitarianist, you would likely believe that accepting the offer is the only moral option.
Your specification doesn’t make this necessarily true. You set the bound on the utility of the subsistence farmers to “> 0”, rather than “> current_you / 100,000”. Of course, total utilitarians being what they are (crazy), it is actually only required that “bonus_utility_for_Stark + subsistence_utility * 100,000 > current_you_utility”, i.e. the total utilitarian would willingly submit 100,000 instances of himself to a negative-utility fate worse than death if it made Stark (sufficiently) happy.
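Spelling those bounds out (notation mine): the post only guarantees

$$U_{\text{serf}} > 0,$$

whereas a total utilitarian who ignores Stark’s gain needs

$$100{,}000 \cdot U_{\text{serf}} > U_{\text{current}},$$

and once Stark’s gain is counted the acceptance condition becomes

$$\Delta U_{\text{Stark}} + 100{,}000 \cdot U_{\text{serf}} > U_{\text{current}}.$$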
(Note the usage “total utilitarian” rather than “total utilitarianist”.)
Is “total utilitarianist” a thing (distinct from “total utilitarian”)?
The word “utilitarian” is already terrible (everything past the first four letters is a jumble of suffix); even if “utilitarianist” were a real word, it would be better not to use it.
I wonder how hard it would be to convince everyone (or at least a substantial minority of everyone) to switch to “utilist” or something equally concise.
I’d prefer to switch everyone to abandoning “utilitarian” entirely as a ridiculous (and abhorrent) value system that doesn’t deserve the privilege it seems to be granted by frequent reference.
Not that either I or Google has heard of.
I’m not sure copies of the same person would count. Yes, they would diverge after a while, but each of them would still have much less relative complexity, given another copy, than genuinely different people raised as different people would.
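In Kolmogorov-complexity terms (my gloss on the point above): immediately after copying, the conditional complexity $K(\text{copy}_i \mid \text{copy}_j)$ is close to zero and grows only as the copies diverge, whereas for two independently raised people $K(A \mid B)$ is large from the start.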
What’s Lord Anthony of House Stark up to? I bet there’s a utilitarian loss somewhere in his plans.
That’s what I immediately thought about, too, but for the sake of the hypothetical I assumed he isn’t doing anything extraordinarily good or extraordinarily evil.
Assume that the utility Lord Stark gains from the servitude of 100,000 instances of you approximately balances the costs he incurs in order to create the 99,999 copies, although he gets a small net gain. He would not break even if he offered to create 99,998 copies.
The utility of people other than you, your copies and Lord Stark is not affected by the transaction (there are no externalities).
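Restating the break-even assumption above in symbols (notation mine, with G(n) for the utility Stark gets from n serfs and C(k) for his cost of creating k copies):

$$G(100{,}000) - C(99{,}999) > 0 > G(99{,}999) - C(99{,}998).$$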
I don’t care about total utility, so to me arbitrarily many copies of myself with a worse life are strictly worse than one copy of myself with a better life. The subjective experience of each one will be that they exchanged a better life for a worse one, and each one will be identical. I do not accept the offer. I think the morality of accepting this offer varies from person to person.
On the other hand, I think a lot of people would take this offer if they themselves were paid handsomely and did not have to become a serf, but their copies did.
It’s not clear what this means, and for reasonable guesses about what it means, there seems to be no way for you to know the truth or falsity of this statement with significant certainty.
(Unless you mean that your emotional response or cached opinion is this way, which answers the original question to some extent, but in that case the specific phrase “I don’t care about total utility” seems to be pretending to be an additional argument that justifies the emotion/opinion, which it doesn’t seem to be doing.)
It’s an emotional claim, but not unthought about.
But what I mean is that I do not see adding entities that slightly prefer being alive to dying as worth doing. I don’t think the total count of utility that exists is important. I value utility for existing entities. I would prefer a world of 10 thousand very happy people to 10 billion slightly happy people.
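To put a rough number on it (purely illustrative, with $u_{\text{very}}$ and $u_{\text{slight}}$ for the per-person utilities): a total view would rank the larger world higher whenever

$$10^{10} \cdot u_{\text{slight}} > 10^{4} \cdot u_{\text{very}}, \quad\text{i.e.}\quad u_{\text{slight}} > 10^{-6} \cdot u_{\text{very}},$$

which is exactly the kind of trade I’m rejecting.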