One of the key aspects of this theory is that it does not necessarily rate the welfare of creatures with simple values as unimportant. On the contrary, it considers increases in their welfare good and decreases in their welfare bad. Because of this, it implies that we ought to avoid creating such creatures in the first place, so that it is not necessary to divert resources from creatures with humane values in order to increase their welfare.
If you assign any positive utility at all, no matter how small, to creating happy low-complexity life, you end up having to create lots of happy viruses (they are happy when they can replicate). If you put a no-value threshold somewhere below humans, you run into other issues, like “it’s OK to torture a cat”.
Allowing yourself to be swayed by thought experiments of the form “If you value x at all, then if offered enough x, you would give up y, and you value one y so much more than one x!” is a recipe for letting scope insensitivity rewrite your utility function.
Be careful about making an emotional judgement on numbers bigger than you can intuitively grasp. If possible, scale down the numbers involved (instead of 1 million x vs. 1 thousand y, imagine 1 thousand x vs. 1 y) before imagining each alternative as viscerally as possible.
In my own experience, not estimating a number and just thinking “Well, it’s really big” basically guarantees that I will not have the proper emotional response.
You would only create these viruses if the total utility of the viruses you can create with the resources at your disposal exceeds the utility of the humans you could make with these same resources. For instance, if you give a utility of 1 to a steel paperclip weighing 1 gram, then assuming a simple additive model (which I wouldn’t, but that’s beside the point) making one metric ton of paperclips has a utility of 1,000,000. If you give a utility of 1,000,000,000 to a steel sculpture weighing a ton, it follows that you will never make any paperclips unless you have less than a ton of steel. You will always make the sculpture, because it gives 1,000 times the utility for the exact same resources.
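To make that arithmetic concrete, here is a minimal sketch in Python of the comparison under the same simple additive assumption; the utility constants are just the illustrative numbers from the example above, not weights I would actually endorse:

```python
# Minimal sketch of the additive-utility comparison above.
# Assumptions: utilities add linearly, and steel mass is the only constraint.

PAPERCLIP_UTILITY = 1                # utility per 1 g paperclip
SCULPTURE_UTILITY = 1_000_000_000    # utility per 1,000 kg (1 metric ton) sculpture

def best_use_of_steel(steel_kg: float) -> str:
    """Compare the two uses of the same steel under the additive model."""
    paperclip_total = steel_kg * 1000 * PAPERCLIP_UTILITY     # one clip per gram
    sculpture_total = (steel_kg // 1000) * SCULPTURE_UTILITY  # whole tons only
    return "sculpture" if sculpture_total > paperclip_total else "paperclips"

print(best_use_of_steel(500))    # under a ton of steel -> "paperclips"
print(best_use_of_steel(2000))   # two tons of steel -> "sculpture"
```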
You would only create these viruses if the total utility of the viruses you can create with the resources at your disposal exceeds the utility of the humans you could make with these same resources.
True, if you start with resource constraints, you can rig the utility scaling to favor more intelligent life. However, if you don’t cheat and instead assign the weights before considering constraints, there is a good chance that the balance will tip the other way, or that there is no obvious competition for resources at all. If you value creating at least mildly happy life, you ought to consider working on, say, silicon-based life, which does not compete with carbon-based life, or on using all the carbon stored in the ocean to create more plankton. In other words, it is easy to find a case where preassigned utilities lead to a runaway simple-life-creation imperative.
I have tried to get around this by creating a two-step process. When considering whether or not to create a creature, the first step is to ask “Does it have humane values?” The second step is to ask “Will it live a good life, without excessively harming others in the process?” If the answer to either of those questions is “no,” then it is not good to create such a creature.
Now, it doesn’t quite end there. If the benefits to others are sufficiently large, then the goodness of creating a creature that fails the process may outweigh the badness of creating it. Creating an animal without humane values may still be good if it provides companionship or service to a human, and the value of that outweighs the cost of caring for it. However, once such a creature is created, we have a responsibility to include it in our utility calculations along with everyone else. We have to make sure it lives a good life, unless there is some other highly pressing concern that outweighs it in our utility calculations.
Now, obviously, creating a person with humane values who will not live a good life may still be a good thing, if they invent a new vaccine or something like that.
I think this process can avoid mandating that we create viruses and mice, while also preserving our intuition that torturing cats is bad.
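Here is a rough sketch, in Python, of how I think of that two-step test plus the override; every predicate name and number in it is a hypothetical placeholder rather than something I know how to compute:

```python
# Rough sketch of the two-step creation test described above.
# All predicates and the benefit/cost figures are hypothetical placeholders.

def should_create(has_humane_values: bool,
                  will_live_good_life: bool,
                  benefit_to_others: float = 0.0,
                  cost_of_creation: float = 0.0) -> bool:
    # Steps 1 and 2: both must hold for creation to be good by default.
    if has_humane_values and will_live_good_life:
        return True
    # Otherwise creation can still be good if the benefits to others
    # (companionship, service, a new vaccine, ...) outweigh the costs.
    return benefit_to_others > cost_of_creation

# A happy virus: fails step 1 and brings no offsetting benefit to others.
print(should_create(False, True))                                               # False
# A companion animal whose benefit to a human exceeds the cost of its care.
print(should_create(False, True, benefit_to_others=5.0, cost_of_creation=2.0))  # True
```

Note that, as said above, once such a creature exists it goes into the utility calculations like everyone else; the sketch only covers the decision to create it.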
If you assign any positive utility at all, no matter how small, to creating happy low-complexity life, you end up having to create lots of happy viruses (they are happy when they can replicate).
You are using a rather expansive definition of “happy.” I consider happiness to be a certain mental process that only occurs in the brains of sufficiently complex creatures, and I do not consider it synonymous with utility, which includes both happiness and the other desires people have.